00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2382 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3643 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.052 Fetching changes from the remote Git repository 00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.068 Using shallow fetch with depth 1 00:00:00.068 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.068 > git --version # timeout=10 00:00:00.080 > git --version # 'git version 2.39.2' 00:00:00.080 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.093 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.093 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.248 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.260 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.260 > git config core.sparsecheckout # timeout=10 00:00:03.272 > git read-tree -mu HEAD # timeout=10 00:00:03.284 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.302 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.302 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.407 [Pipeline] Start of Pipeline 00:00:03.421 [Pipeline] library 00:00:03.423 Loading library shm_lib@master 00:00:03.423 Library shm_lib@master is cached. Copying from home. 00:00:03.440 [Pipeline] node 00:00:03.453 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:03.454 [Pipeline] { 00:00:03.462 [Pipeline] catchError 00:00:03.463 [Pipeline] { 00:00:03.473 [Pipeline] wrap 00:00:03.482 [Pipeline] { 00:00:03.490 [Pipeline] stage 00:00:03.492 [Pipeline] { (Prologue) 00:00:03.508 [Pipeline] echo 00:00:03.509 Node: VM-host-SM9 00:00:03.513 [Pipeline] cleanWs 00:00:03.520 [WS-CLEANUP] Deleting project workspace... 00:00:03.520 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.528 [WS-CLEANUP] done 00:00:03.716 [Pipeline] setCustomBuildProperty 00:00:03.794 [Pipeline] httpRequest 00:00:04.268 [Pipeline] echo 00:00:04.270 Sorcerer 10.211.164.20 is alive 00:00:04.278 [Pipeline] retry 00:00:04.280 [Pipeline] { 00:00:04.294 [Pipeline] httpRequest 00:00:04.298 HttpMethod: GET 00:00:04.299 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.299 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.300 Response Code: HTTP/1.1 200 OK 00:00:04.300 Success: Status code 200 is in the accepted range: 200,404 00:00:04.301 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.446 [Pipeline] } 00:00:04.460 [Pipeline] // retry 00:00:04.467 [Pipeline] sh 00:00:04.741 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.760 [Pipeline] httpRequest 00:00:05.084 [Pipeline] echo 00:00:05.091 Sorcerer 10.211.164.20 is alive 00:00:05.100 [Pipeline] retry 00:00:05.103 [Pipeline] { 00:00:05.118 [Pipeline] httpRequest 00:00:05.122 HttpMethod: GET 00:00:05.122 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.123 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.125 Response Code: HTTP/1.1 200 OK 00:00:05.125 Success: Status code 200 is in the accepted range: 200,404 00:00:05.125 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:25.643 [Pipeline] } 00:00:25.661 [Pipeline] // retry 00:00:25.669 [Pipeline] sh 00:00:25.949 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:28.493 [Pipeline] sh 00:00:28.773 + git -C spdk log --oneline -n5 00:00:28.773 c13c99a5e test: Various fixes for Fedora40 00:00:28.773 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:28.773 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:28.773 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:28.773 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:28.792 [Pipeline] writeFile 00:00:28.807 [Pipeline] sh 00:00:29.091 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:29.105 [Pipeline] sh 00:00:29.391 + cat autorun-spdk.conf 00:00:29.391 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.391 SPDK_TEST_NVMF=1 00:00:29.391 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.391 SPDK_TEST_URING=1 00:00:29.391 SPDK_TEST_VFIOUSER=1 00:00:29.391 SPDK_TEST_USDT=1 00:00:29.391 SPDK_RUN_UBSAN=1 00:00:29.391 NET_TYPE=virt 00:00:29.391 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.399 RUN_NIGHTLY=1 00:00:29.401 [Pipeline] } 00:00:29.415 [Pipeline] // stage 00:00:29.431 [Pipeline] stage 00:00:29.433 [Pipeline] { (Run VM) 00:00:29.446 [Pipeline] sh 00:00:29.728 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:29.728 + echo 'Start stage prepare_nvme.sh' 00:00:29.728 Start stage prepare_nvme.sh 00:00:29.728 + [[ -n 4 ]] 00:00:29.728 + disk_prefix=ex4 00:00:29.728 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:29.728 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:29.728 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:29.728 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.728 ++ SPDK_TEST_NVMF=1 00:00:29.728 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.728 ++ SPDK_TEST_URING=1 00:00:29.728 ++ SPDK_TEST_VFIOUSER=1 00:00:29.728 ++ SPDK_TEST_USDT=1 00:00:29.728 ++ SPDK_RUN_UBSAN=1 00:00:29.728 ++ NET_TYPE=virt 00:00:29.728 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.728 ++ RUN_NIGHTLY=1 00:00:29.728 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:29.728 + nvme_files=() 00:00:29.728 + declare -A nvme_files 00:00:29.728 + backend_dir=/var/lib/libvirt/images/backends 00:00:29.728 + nvme_files['nvme.img']=5G 00:00:29.728 + nvme_files['nvme-cmb.img']=5G 00:00:29.728 + nvme_files['nvme-multi0.img']=4G 00:00:29.729 + nvme_files['nvme-multi1.img']=4G 00:00:29.729 + nvme_files['nvme-multi2.img']=4G 00:00:29.729 + nvme_files['nvme-openstack.img']=8G 00:00:29.729 + nvme_files['nvme-zns.img']=5G 00:00:29.729 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:29.729 + (( SPDK_TEST_FTL == 1 )) 00:00:29.729 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:29.729 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:29.729 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.729 + for nvme in "${!nvme_files[@]}" 00:00:29.729 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:29.987 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.988 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:29.988 + echo 'End stage prepare_nvme.sh' 00:00:29.988 End stage prepare_nvme.sh 00:00:29.999 [Pipeline] sh 00:00:30.280 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.280 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img 
-b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:00:30.280 00:00:30.280 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:30.280 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:30.280 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:30.280 HELP=0 00:00:30.280 DRY_RUN=0 00:00:30.280 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:30.280 NVME_DISKS_TYPE=nvme,nvme, 00:00:30.280 NVME_AUTO_CREATE=0 00:00:30.280 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:30.280 NVME_CMB=,, 00:00:30.280 NVME_PMR=,, 00:00:30.280 NVME_ZNS=,, 00:00:30.280 NVME_MS=,, 00:00:30.280 NVME_FDP=,, 00:00:30.280 SPDK_VAGRANT_DISTRO=fedora39 00:00:30.280 SPDK_VAGRANT_VMCPU=10 00:00:30.280 SPDK_VAGRANT_VMRAM=12288 00:00:30.280 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.280 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.280 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.280 SPDK_OPENSTACK_NETWORK=0 00:00:30.280 VAGRANT_PACKAGE_BOX=0 00:00:30.280 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:30.280 FORCE_DISTRO=true 00:00:30.280 VAGRANT_BOX_VERSION= 00:00:30.280 EXTRA_VAGRANTFILES= 00:00:30.280 NIC_MODEL=e1000 00:00:30.280 00:00:30.280 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:30.280 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:32.811 Bringing machine 'default' up with 'libvirt' provider... 00:00:33.380 ==> default: Creating image (snapshot of base box volume). 00:00:33.701 ==> default: Creating domain with the following settings... 
00:00:33.701 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731952731_ea80a0cf3f292f75adb9 00:00:33.701 ==> default: -- Domain type: kvm 00:00:33.701 ==> default: -- Cpus: 10 00:00:33.701 ==> default: -- Feature: acpi 00:00:33.701 ==> default: -- Feature: apic 00:00:33.701 ==> default: -- Feature: pae 00:00:33.701 ==> default: -- Memory: 12288M 00:00:33.701 ==> default: -- Memory Backing: hugepages: 00:00:33.701 ==> default: -- Management MAC: 00:00:33.701 ==> default: -- Loader: 00:00:33.701 ==> default: -- Nvram: 00:00:33.701 ==> default: -- Base box: spdk/fedora39 00:00:33.701 ==> default: -- Storage pool: default 00:00:33.701 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731952731_ea80a0cf3f292f75adb9.img (20G) 00:00:33.701 ==> default: -- Volume Cache: default 00:00:33.701 ==> default: -- Kernel: 00:00:33.701 ==> default: -- Initrd: 00:00:33.701 ==> default: -- Graphics Type: vnc 00:00:33.701 ==> default: -- Graphics Port: -1 00:00:33.701 ==> default: -- Graphics IP: 127.0.0.1 00:00:33.701 ==> default: -- Graphics Password: Not defined 00:00:33.701 ==> default: -- Video Type: cirrus 00:00:33.701 ==> default: -- Video VRAM: 9216 00:00:33.701 ==> default: -- Sound Type: 00:00:33.701 ==> default: -- Keymap: en-us 00:00:33.701 ==> default: -- TPM Path: 00:00:33.701 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:33.701 ==> default: -- Command line args: 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:33.701 ==> default: -> value=-drive, 00:00:33.701 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:00:33.701 ==> default: -> value=-drive, 00:00:33.701 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.701 ==> default: -> value=-drive, 00:00:33.701 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.701 ==> default: -> value=-drive, 00:00:33.701 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:33.701 ==> default: -> value=-device, 00:00:33.701 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.701 ==> default: Creating shared folders metadata... 00:00:33.701 ==> default: Starting domain. 00:00:35.079 ==> default: Waiting for domain to get an IP address... 00:00:53.168 ==> default: Waiting for SSH to become available... 00:00:53.168 ==> default: Configuring and enabling network interfaces... 
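(For reference, the -drive/-device pairs logged above follow QEMU's emulated-NVMe convention: each raw backing file is declared as a blockdev with if=none, an nvme controller is identified by its serial number, and each nvme-ns device exposes one namespace backed by one of those blockdevs. A minimal hand-typed equivalent for the single-namespace controller is sketched below; the image path, serial and namespace properties are taken from the log, while the machine/accel/memory flags are illustrative placeholders, since libvirt assembles the real command line for this VM.)

    # Sketch only: libvirt/vagrant generate the actual invocation in this job;
    # machine/accel/memory values below are assumptions, not taken from the log.
    qemu-system-x86_64 \
      -machine q35,accel=kvm \
      -m 2048 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096
    # The second controller (serial=12341) repeats the -drive / -device nvme-ns pair
    # once per ex4-nvme-multi*.img, yielding namespaces nsid=1..3 on bus nvme-1, as logged above.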
00:00:55.704 default: SSH address: 192.168.121.241:22 00:00:55.704 default: SSH username: vagrant 00:00:55.704 default: SSH auth method: private key 00:00:58.238 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:06.359 ==> default: Mounting SSHFS shared folder... 00:01:06.927 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:06.927 ==> default: Checking Mount.. 00:01:08.303 ==> default: Folder Successfully Mounted! 00:01:08.303 ==> default: Running provisioner: file... 00:01:09.238 default: ~/.gitconfig => .gitconfig 00:01:09.496 00:01:09.496 SUCCESS! 00:01:09.496 00:01:09.496 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:09.496 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:09.496 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:09.496 00:01:09.504 [Pipeline] } 00:01:09.519 [Pipeline] // stage 00:01:09.528 [Pipeline] dir 00:01:09.529 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:09.530 [Pipeline] { 00:01:09.543 [Pipeline] catchError 00:01:09.545 [Pipeline] { 00:01:09.557 [Pipeline] sh 00:01:09.835 + vagrant ssh-config --host vagrant 00:01:09.835 + sed -ne /^Host/,$p 00:01:09.835 + tee ssh_conf 00:01:13.125 Host vagrant 00:01:13.125 HostName 192.168.121.241 00:01:13.125 User vagrant 00:01:13.125 Port 22 00:01:13.125 UserKnownHostsFile /dev/null 00:01:13.125 StrictHostKeyChecking no 00:01:13.125 PasswordAuthentication no 00:01:13.125 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:13.125 IdentitiesOnly yes 00:01:13.125 LogLevel FATAL 00:01:13.125 ForwardAgent yes 00:01:13.125 ForwardX11 yes 00:01:13.125 00:01:13.139 [Pipeline] withEnv 00:01:13.141 [Pipeline] { 00:01:13.154 [Pipeline] sh 00:01:13.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:13.434 source /etc/os-release 00:01:13.434 [[ -e /image.version ]] && img=$(< /image.version) 00:01:13.434 # Minimal, systemd-like check. 00:01:13.434 if [[ -e /.dockerenv ]]; then 00:01:13.434 # Clear garbage from the node's name: 00:01:13.434 # agt-er_autotest_547-896 -> autotest_547-896 00:01:13.434 # $HOSTNAME is the actual container id 00:01:13.434 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:13.434 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:13.434 # We can assume this is a mount from a host where container is running, 00:01:13.434 # so fetch its hostname to easily identify the target swarm worker. 
00:01:13.434 container="$(< /etc/hostname) ($agent)" 00:01:13.434 else 00:01:13.434 # Fallback 00:01:13.434 container=$agent 00:01:13.434 fi 00:01:13.434 fi 00:01:13.434 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:13.434 00:01:13.705 [Pipeline] } 00:01:13.721 [Pipeline] // withEnv 00:01:13.729 [Pipeline] setCustomBuildProperty 00:01:13.744 [Pipeline] stage 00:01:13.746 [Pipeline] { (Tests) 00:01:13.764 [Pipeline] sh 00:01:14.045 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:14.317 [Pipeline] sh 00:01:14.599 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:14.873 [Pipeline] timeout 00:01:14.873 Timeout set to expire in 1 hr 0 min 00:01:14.875 [Pipeline] { 00:01:14.889 [Pipeline] sh 00:01:15.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:15.791 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:15.803 [Pipeline] sh 00:01:16.084 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:16.356 [Pipeline] sh 00:01:16.640 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:16.915 [Pipeline] sh 00:01:17.199 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:17.458 ++ readlink -f spdk_repo 00:01:17.458 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:17.458 + [[ -n /home/vagrant/spdk_repo ]] 00:01:17.458 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:17.458 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:17.458 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:17.458 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:17.458 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:17.458 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:17.458 + cd /home/vagrant/spdk_repo 00:01:17.458 + source /etc/os-release 00:01:17.458 ++ NAME='Fedora Linux' 00:01:17.458 ++ VERSION='39 (Cloud Edition)' 00:01:17.458 ++ ID=fedora 00:01:17.458 ++ VERSION_ID=39 00:01:17.458 ++ VERSION_CODENAME= 00:01:17.458 ++ PLATFORM_ID=platform:f39 00:01:17.458 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:17.458 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.458 ++ LOGO=fedora-logo-icon 00:01:17.458 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:17.458 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.458 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:17.458 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.458 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.458 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.458 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:17.458 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.458 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:17.458 ++ SUPPORT_END=2024-11-12 00:01:17.458 ++ VARIANT='Cloud Edition' 00:01:17.458 ++ VARIANT_ID=cloud 00:01:17.458 + uname -a 00:01:17.458 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:17.458 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:17.458 Hugepages 00:01:17.458 node hugesize free / total 00:01:17.458 node0 1048576kB 0 / 0 00:01:17.458 node0 2048kB 0 / 0 00:01:17.458 00:01:17.458 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:17.458 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:17.458 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:17.458 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:17.458 + rm -f /tmp/spdk-ld-path 00:01:17.458 + source autorun-spdk.conf 00:01:17.458 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.458 ++ SPDK_TEST_NVMF=1 00:01:17.458 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.458 ++ SPDK_TEST_URING=1 00:01:17.458 ++ SPDK_TEST_VFIOUSER=1 00:01:17.458 ++ SPDK_TEST_USDT=1 00:01:17.458 ++ SPDK_RUN_UBSAN=1 00:01:17.458 ++ NET_TYPE=virt 00:01:17.458 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.458 ++ RUN_NIGHTLY=1 00:01:17.458 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:17.458 + [[ -n '' ]] 00:01:17.458 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:17.717 + for M in /var/spdk/build-*-manifest.txt 00:01:17.717 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:17.717 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.718 + for M in /var/spdk/build-*-manifest.txt 00:01:17.718 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:17.718 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.718 + for M in /var/spdk/build-*-manifest.txt 00:01:17.718 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:17.718 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:17.718 ++ uname 00:01:17.718 + [[ Linux == \L\i\n\u\x ]] 00:01:17.718 + sudo dmesg -T 00:01:17.718 + sudo dmesg --clear 00:01:17.718 + dmesg_pid=5231 00:01:17.718 + [[ Fedora Linux == FreeBSD ]] 00:01:17.718 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.718 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:17.718 + sudo dmesg -Tw 00:01:17.718 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:01:17.718 + [[ -x /usr/src/fio-static/fio ]] 00:01:17.718 + export FIO_BIN=/usr/src/fio-static/fio 00:01:17.718 + FIO_BIN=/usr/src/fio-static/fio 00:01:17.718 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:17.718 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:17.718 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:17.718 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.718 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:17.718 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:17.718 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.718 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:17.718 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:17.718 Test configuration: 00:01:17.718 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.718 SPDK_TEST_NVMF=1 00:01:17.718 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.718 SPDK_TEST_URING=1 00:01:17.718 SPDK_TEST_VFIOUSER=1 00:01:17.718 SPDK_TEST_USDT=1 00:01:17.718 SPDK_RUN_UBSAN=1 00:01:17.718 NET_TYPE=virt 00:01:17.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.718 RUN_NIGHTLY=1 17:59:36 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:17.718 17:59:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:17.718 17:59:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:17.718 17:59:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:17.718 17:59:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:17.718 17:59:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.718 17:59:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.718 17:59:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.718 17:59:36 -- paths/export.sh@5 -- $ export PATH 00:01:17.718 17:59:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:17.718 17:59:36 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:17.718 17:59:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:17.718 17:59:36 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731952776.XXXXXX 00:01:17.718 17:59:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731952776.r9ffwa 00:01:17.718 17:59:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:17.718 17:59:36 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:17.718 17:59:36 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:17.718 17:59:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:17.718 17:59:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:17.718 17:59:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:17.718 17:59:36 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:17.718 17:59:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.718 17:59:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:01:17.718 17:59:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.718 17:59:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.718 17:59:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:17.718 17:59:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.718 Mon Nov 18 05:59:36 PM UTC 2024 00:01:17.718 17:59:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.718 LTS-67-gc13c99a5e 00:01:17.718 17:59:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.718 17:59:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.718 17:59:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.718 17:59:36 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:17.718 17:59:36 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:17.718 17:59:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.718 ************************************ 00:01:17.718 START TEST ubsan 00:01:17.718 ************************************ 00:01:17.718 using ubsan 00:01:17.718 17:59:36 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:17.718 00:01:17.718 real 0m0.000s 00:01:17.718 user 0m0.000s 00:01:17.718 sys 0m0.000s 00:01:17.718 17:59:36 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:17.718 17:59:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.718 ************************************ 00:01:17.718 END TEST ubsan 00:01:17.718 ************************************ 00:01:17.977 17:59:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.977 17:59:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.977 17:59:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.977 17:59:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:01:18.235 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:18.235 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:18.494 Using 'verbs' RDMA provider 00:01:33.944 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:46.190 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:46.190 Creating mk/config.mk...done. 00:01:46.190 Creating mk/cc.flags.mk...done. 00:01:46.190 Type 'make' to build. 00:01:46.190 18:00:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:46.190 18:00:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:46.190 18:00:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:46.190 18:00:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.190 ************************************ 00:01:46.190 START TEST make 00:01:46.190 ************************************ 00:01:46.190 18:00:03 -- common/autotest_common.sh@1114 -- $ make -j10 00:01:46.190 make[1]: Nothing to be done for 'all'. 00:01:46.448 The Meson build system 00:01:46.448 Version: 1.5.0 00:01:46.448 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:01:46.448 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:46.448 Build type: native build 00:01:46.448 Project name: libvfio-user 00:01:46.448 Project version: 0.0.1 00:01:46.448 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:46.448 C linker for the host machine: cc ld.bfd 2.40-14 00:01:46.448 Host machine cpu family: x86_64 00:01:46.448 Host machine cpu: x86_64 00:01:46.448 Run-time dependency threads found: YES 00:01:46.448 Library dl found: YES 00:01:46.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:46.448 Run-time dependency json-c found: YES 0.17 00:01:46.448 Run-time dependency cmocka found: YES 1.1.7 00:01:46.448 Program pytest-3 found: NO 00:01:46.448 Program flake8 found: NO 00:01:46.448 Program misspell-fixer found: NO 00:01:46.448 Program restructuredtext-lint found: NO 00:01:46.448 Program valgrind found: YES (/usr/bin/valgrind) 00:01:46.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.448 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.448 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.448 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:46.448 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:01:46.448 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:01:46.448 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:46.448 Build targets in project: 8 00:01:46.448 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:46.448 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:46.448 00:01:46.448 libvfio-user 0.0.1 00:01:46.448 00:01:46.448 User defined options 00:01:46.448 buildtype : debug 00:01:46.448 default_library: shared 00:01:46.448 libdir : /usr/local/lib 00:01:46.448 00:01:46.448 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.707 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:46.965 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:46.965 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:46.965 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:46.965 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:46.965 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:46.966 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:46.966 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:46.966 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:46.966 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:46.966 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:47.224 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:47.224 [12/37] Compiling C object samples/null.p/null.c.o 00:01:47.224 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:47.224 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:47.224 [15/37] Compiling C object samples/server.p/server.c.o 00:01:47.224 [16/37] Compiling C object samples/client.p/client.c.o 00:01:47.224 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:47.224 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:47.224 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:47.224 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:47.224 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:47.224 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:47.224 [23/37] Linking target samples/client 00:01:47.224 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:47.224 [25/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:47.224 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:47.224 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:01:47.224 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:47.224 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:47.224 [30/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:47.482 [31/37] Linking target test/unit_tests 00:01:47.482 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:47.482 [33/37] Linking target samples/server 00:01:47.482 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:47.482 [35/37] Linking target samples/lspci 00:01:47.482 [36/37] Linking target samples/gpio-pci-idio-16 00:01:47.482 [37/37] Linking target samples/null 00:01:47.482 INFO: autodetecting backend as ninja 00:01:47.482 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:47.482 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:01:48.049 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:01:48.049 ninja: no work to do. 00:01:58.020 The Meson build system 00:01:58.020 Version: 1.5.0 00:01:58.020 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:58.020 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:58.020 Build type: native build 00:01:58.020 Program cat found: YES (/usr/bin/cat) 00:01:58.020 Project name: DPDK 00:01:58.020 Project version: 23.11.0 00:01:58.020 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.020 C linker for the host machine: cc ld.bfd 2.40-14 00:01:58.020 Host machine cpu family: x86_64 00:01:58.020 Host machine cpu: x86_64 00:01:58.020 Message: ## Building in Developer Mode ## 00:01:58.020 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.020 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.020 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.020 Program python3 found: YES (/usr/bin/python3) 00:01:58.020 Program cat found: YES (/usr/bin/cat) 00:01:58.020 Compiler for C supports arguments -march=native: YES 00:01:58.020 Checking for size of "void *" : 8 00:01:58.020 Checking for size of "void *" : 8 (cached) 00:01:58.020 Library m found: YES 00:01:58.020 Library numa found: YES 00:01:58.020 Has header "numaif.h" : YES 00:01:58.020 Library fdt found: NO 00:01:58.020 Library execinfo found: NO 00:01:58.020 Has header "execinfo.h" : YES 00:01:58.020 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.020 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.021 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.021 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.021 Run-time dependency openssl found: YES 3.1.1 00:01:58.021 Run-time dependency libpcap found: YES 1.10.4 00:01:58.021 Has header "pcap.h" with dependency libpcap: YES 00:01:58.021 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.021 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.021 Compiler for C supports arguments -Wformat: YES 00:01:58.021 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.021 Compiler for C supports arguments -Wformat-security: NO 00:01:58.021 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.021 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.021 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.021 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.021 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.021 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.021 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.021 Compiler for C supports arguments -Wundef: YES 00:01:58.021 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.021 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.021 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.021 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.021 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.021 Program objdump found: YES (/usr/bin/objdump) 00:01:58.021 
Compiler for C supports arguments -mavx512f: YES 00:01:58.021 Checking if "AVX512 checking" compiles: YES 00:01:58.021 Fetching value of define "__SSE4_2__" : 1 00:01:58.021 Fetching value of define "__AES__" : 1 00:01:58.021 Fetching value of define "__AVX__" : 1 00:01:58.021 Fetching value of define "__AVX2__" : 1 00:01:58.021 Fetching value of define "__AVX512BW__" : (undefined) 00:01:58.021 Fetching value of define "__AVX512CD__" : (undefined) 00:01:58.021 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:58.021 Fetching value of define "__AVX512F__" : (undefined) 00:01:58.021 Fetching value of define "__AVX512VL__" : (undefined) 00:01:58.021 Fetching value of define "__PCLMUL__" : 1 00:01:58.021 Fetching value of define "__RDRND__" : 1 00:01:58.021 Fetching value of define "__RDSEED__" : 1 00:01:58.021 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.021 Fetching value of define "__znver1__" : (undefined) 00:01:58.021 Fetching value of define "__znver2__" : (undefined) 00:01:58.021 Fetching value of define "__znver3__" : (undefined) 00:01:58.021 Fetching value of define "__znver4__" : (undefined) 00:01:58.021 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.021 Message: lib/log: Defining dependency "log" 00:01:58.021 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.021 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.021 Checking for function "getentropy" : NO 00:01:58.021 Message: lib/eal: Defining dependency "eal" 00:01:58.021 Message: lib/ring: Defining dependency "ring" 00:01:58.021 Message: lib/rcu: Defining dependency "rcu" 00:01:58.021 Message: lib/mempool: Defining dependency "mempool" 00:01:58.021 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.021 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.021 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.021 Compiler for C supports arguments -mpclmul: YES 00:01:58.021 Compiler for C supports arguments -maes: YES 00:01:58.021 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.021 Compiler for C supports arguments -mavx512bw: YES 00:01:58.021 Compiler for C supports arguments -mavx512dq: YES 00:01:58.021 Compiler for C supports arguments -mavx512vl: YES 00:01:58.021 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.021 Compiler for C supports arguments -mavx2: YES 00:01:58.021 Compiler for C supports arguments -mavx: YES 00:01:58.021 Message: lib/net: Defining dependency "net" 00:01:58.021 Message: lib/meter: Defining dependency "meter" 00:01:58.021 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.021 Message: lib/pci: Defining dependency "pci" 00:01:58.021 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.021 Message: lib/hash: Defining dependency "hash" 00:01:58.021 Message: lib/timer: Defining dependency "timer" 00:01:58.021 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.021 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.021 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.021 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.021 Message: lib/power: Defining dependency "power" 00:01:58.021 Message: lib/reorder: Defining dependency "reorder" 00:01:58.021 Message: lib/security: Defining dependency "security" 00:01:58.021 Has header "linux/userfaultfd.h" : YES 00:01:58.021 Has header "linux/vduse.h" : YES 00:01:58.021 Message: lib/vhost: Defining dependency "vhost" 00:01:58.021 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:58.021 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.021 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.021 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.021 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.021 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.021 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.021 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.021 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.021 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.021 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:58.021 Configuring doxy-api-html.conf using configuration 00:01:58.021 Configuring doxy-api-man.conf using configuration 00:01:58.021 Program mandb found: YES (/usr/bin/mandb) 00:01:58.021 Program sphinx-build found: NO 00:01:58.021 Configuring rte_build_config.h using configuration 00:01:58.021 Message: 00:01:58.021 ================= 00:01:58.021 Applications Enabled 00:01:58.021 ================= 00:01:58.021 00:01:58.021 apps: 00:01:58.021 00:01:58.021 00:01:58.021 Message: 00:01:58.021 ================= 00:01:58.021 Libraries Enabled 00:01:58.021 ================= 00:01:58.021 00:01:58.021 libs: 00:01:58.021 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.021 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.021 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.021 00:01:58.021 Message: 00:01:58.021 =============== 00:01:58.021 Drivers Enabled 00:01:58.021 =============== 00:01:58.021 00:01:58.021 common: 00:01:58.021 00:01:58.021 bus: 00:01:58.021 pci, vdev, 00:01:58.021 mempool: 00:01:58.021 ring, 00:01:58.021 dma: 00:01:58.021 00:01:58.021 net: 00:01:58.021 00:01:58.021 crypto: 00:01:58.021 00:01:58.021 compress: 00:01:58.021 00:01:58.021 vdpa: 00:01:58.021 00:01:58.021 00:01:58.021 Message: 00:01:58.021 ================= 00:01:58.021 Content Skipped 00:01:58.021 ================= 00:01:58.021 00:01:58.021 apps: 00:01:58.021 dumpcap: explicitly disabled via build config 00:01:58.021 graph: explicitly disabled via build config 00:01:58.021 pdump: explicitly disabled via build config 00:01:58.021 proc-info: explicitly disabled via build config 00:01:58.021 test-acl: explicitly disabled via build config 00:01:58.021 test-bbdev: explicitly disabled via build config 00:01:58.021 test-cmdline: explicitly disabled via build config 00:01:58.021 test-compress-perf: explicitly disabled via build config 00:01:58.021 test-crypto-perf: explicitly disabled via build config 00:01:58.021 test-dma-perf: explicitly disabled via build config 00:01:58.021 test-eventdev: explicitly disabled via build config 00:01:58.021 test-fib: explicitly disabled via build config 00:01:58.021 test-flow-perf: explicitly disabled via build config 00:01:58.022 test-gpudev: explicitly disabled via build config 00:01:58.022 test-mldev: explicitly disabled via build config 00:01:58.022 test-pipeline: explicitly disabled via build config 00:01:58.022 test-pmd: explicitly disabled via build config 00:01:58.022 test-regex: explicitly disabled via build config 00:01:58.022 test-sad: explicitly disabled via build config 00:01:58.022 test-security-perf: explicitly disabled via build config 00:01:58.022 00:01:58.022 libs: 00:01:58.022 metrics: explicitly 
disabled via build config 00:01:58.022 acl: explicitly disabled via build config 00:01:58.022 bbdev: explicitly disabled via build config 00:01:58.022 bitratestats: explicitly disabled via build config 00:01:58.022 bpf: explicitly disabled via build config 00:01:58.022 cfgfile: explicitly disabled via build config 00:01:58.022 distributor: explicitly disabled via build config 00:01:58.022 efd: explicitly disabled via build config 00:01:58.022 eventdev: explicitly disabled via build config 00:01:58.022 dispatcher: explicitly disabled via build config 00:01:58.022 gpudev: explicitly disabled via build config 00:01:58.022 gro: explicitly disabled via build config 00:01:58.022 gso: explicitly disabled via build config 00:01:58.022 ip_frag: explicitly disabled via build config 00:01:58.022 jobstats: explicitly disabled via build config 00:01:58.022 latencystats: explicitly disabled via build config 00:01:58.022 lpm: explicitly disabled via build config 00:01:58.022 member: explicitly disabled via build config 00:01:58.022 pcapng: explicitly disabled via build config 00:01:58.022 rawdev: explicitly disabled via build config 00:01:58.022 regexdev: explicitly disabled via build config 00:01:58.022 mldev: explicitly disabled via build config 00:01:58.022 rib: explicitly disabled via build config 00:01:58.022 sched: explicitly disabled via build config 00:01:58.022 stack: explicitly disabled via build config 00:01:58.022 ipsec: explicitly disabled via build config 00:01:58.022 pdcp: explicitly disabled via build config 00:01:58.022 fib: explicitly disabled via build config 00:01:58.022 port: explicitly disabled via build config 00:01:58.022 pdump: explicitly disabled via build config 00:01:58.022 table: explicitly disabled via build config 00:01:58.022 pipeline: explicitly disabled via build config 00:01:58.022 graph: explicitly disabled via build config 00:01:58.022 node: explicitly disabled via build config 00:01:58.022 00:01:58.022 drivers: 00:01:58.022 common/cpt: not in enabled drivers build config 00:01:58.022 common/dpaax: not in enabled drivers build config 00:01:58.022 common/iavf: not in enabled drivers build config 00:01:58.022 common/idpf: not in enabled drivers build config 00:01:58.022 common/mvep: not in enabled drivers build config 00:01:58.022 common/octeontx: not in enabled drivers build config 00:01:58.022 bus/auxiliary: not in enabled drivers build config 00:01:58.022 bus/cdx: not in enabled drivers build config 00:01:58.022 bus/dpaa: not in enabled drivers build config 00:01:58.022 bus/fslmc: not in enabled drivers build config 00:01:58.022 bus/ifpga: not in enabled drivers build config 00:01:58.022 bus/platform: not in enabled drivers build config 00:01:58.022 bus/vmbus: not in enabled drivers build config 00:01:58.022 common/cnxk: not in enabled drivers build config 00:01:58.022 common/mlx5: not in enabled drivers build config 00:01:58.022 common/nfp: not in enabled drivers build config 00:01:58.022 common/qat: not in enabled drivers build config 00:01:58.022 common/sfc_efx: not in enabled drivers build config 00:01:58.022 mempool/bucket: not in enabled drivers build config 00:01:58.022 mempool/cnxk: not in enabled drivers build config 00:01:58.022 mempool/dpaa: not in enabled drivers build config 00:01:58.022 mempool/dpaa2: not in enabled drivers build config 00:01:58.022 mempool/octeontx: not in enabled drivers build config 00:01:58.022 mempool/stack: not in enabled drivers build config 00:01:58.022 dma/cnxk: not in enabled drivers build config 00:01:58.022 dma/dpaa: not in 
enabled drivers build config 00:01:58.022 dma/dpaa2: not in enabled drivers build config 00:01:58.022 dma/hisilicon: not in enabled drivers build config 00:01:58.022 dma/idxd: not in enabled drivers build config 00:01:58.022 dma/ioat: not in enabled drivers build config 00:01:58.022 dma/skeleton: not in enabled drivers build config 00:01:58.022 net/af_packet: not in enabled drivers build config 00:01:58.022 net/af_xdp: not in enabled drivers build config 00:01:58.022 net/ark: not in enabled drivers build config 00:01:58.022 net/atlantic: not in enabled drivers build config 00:01:58.022 net/avp: not in enabled drivers build config 00:01:58.022 net/axgbe: not in enabled drivers build config 00:01:58.022 net/bnx2x: not in enabled drivers build config 00:01:58.022 net/bnxt: not in enabled drivers build config 00:01:58.022 net/bonding: not in enabled drivers build config 00:01:58.022 net/cnxk: not in enabled drivers build config 00:01:58.022 net/cpfl: not in enabled drivers build config 00:01:58.022 net/cxgbe: not in enabled drivers build config 00:01:58.022 net/dpaa: not in enabled drivers build config 00:01:58.022 net/dpaa2: not in enabled drivers build config 00:01:58.022 net/e1000: not in enabled drivers build config 00:01:58.022 net/ena: not in enabled drivers build config 00:01:58.022 net/enetc: not in enabled drivers build config 00:01:58.022 net/enetfec: not in enabled drivers build config 00:01:58.022 net/enic: not in enabled drivers build config 00:01:58.022 net/failsafe: not in enabled drivers build config 00:01:58.022 net/fm10k: not in enabled drivers build config 00:01:58.022 net/gve: not in enabled drivers build config 00:01:58.022 net/hinic: not in enabled drivers build config 00:01:58.022 net/hns3: not in enabled drivers build config 00:01:58.022 net/i40e: not in enabled drivers build config 00:01:58.022 net/iavf: not in enabled drivers build config 00:01:58.022 net/ice: not in enabled drivers build config 00:01:58.022 net/idpf: not in enabled drivers build config 00:01:58.022 net/igc: not in enabled drivers build config 00:01:58.022 net/ionic: not in enabled drivers build config 00:01:58.022 net/ipn3ke: not in enabled drivers build config 00:01:58.022 net/ixgbe: not in enabled drivers build config 00:01:58.022 net/mana: not in enabled drivers build config 00:01:58.022 net/memif: not in enabled drivers build config 00:01:58.022 net/mlx4: not in enabled drivers build config 00:01:58.022 net/mlx5: not in enabled drivers build config 00:01:58.022 net/mvneta: not in enabled drivers build config 00:01:58.022 net/mvpp2: not in enabled drivers build config 00:01:58.022 net/netvsc: not in enabled drivers build config 00:01:58.022 net/nfb: not in enabled drivers build config 00:01:58.022 net/nfp: not in enabled drivers build config 00:01:58.022 net/ngbe: not in enabled drivers build config 00:01:58.022 net/null: not in enabled drivers build config 00:01:58.022 net/octeontx: not in enabled drivers build config 00:01:58.022 net/octeon_ep: not in enabled drivers build config 00:01:58.022 net/pcap: not in enabled drivers build config 00:01:58.022 net/pfe: not in enabled drivers build config 00:01:58.022 net/qede: not in enabled drivers build config 00:01:58.022 net/ring: not in enabled drivers build config 00:01:58.022 net/sfc: not in enabled drivers build config 00:01:58.022 net/softnic: not in enabled drivers build config 00:01:58.022 net/tap: not in enabled drivers build config 00:01:58.022 net/thunderx: not in enabled drivers build config 00:01:58.022 net/txgbe: not in enabled drivers 
build config 00:01:58.022 net/vdev_netvsc: not in enabled drivers build config 00:01:58.022 net/vhost: not in enabled drivers build config 00:01:58.022 net/virtio: not in enabled drivers build config 00:01:58.022 net/vmxnet3: not in enabled drivers build config 00:01:58.022 raw/*: missing internal dependency, "rawdev" 00:01:58.022 crypto/armv8: not in enabled drivers build config 00:01:58.022 crypto/bcmfs: not in enabled drivers build config 00:01:58.022 crypto/caam_jr: not in enabled drivers build config 00:01:58.022 crypto/ccp: not in enabled drivers build config 00:01:58.022 crypto/cnxk: not in enabled drivers build config 00:01:58.022 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.022 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.022 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.022 crypto/mlx5: not in enabled drivers build config 00:01:58.022 crypto/mvsam: not in enabled drivers build config 00:01:58.022 crypto/nitrox: not in enabled drivers build config 00:01:58.022 crypto/null: not in enabled drivers build config 00:01:58.022 crypto/octeontx: not in enabled drivers build config 00:01:58.022 crypto/openssl: not in enabled drivers build config 00:01:58.022 crypto/scheduler: not in enabled drivers build config 00:01:58.022 crypto/uadk: not in enabled drivers build config 00:01:58.022 crypto/virtio: not in enabled drivers build config 00:01:58.022 compress/isal: not in enabled drivers build config 00:01:58.022 compress/mlx5: not in enabled drivers build config 00:01:58.022 compress/octeontx: not in enabled drivers build config 00:01:58.022 compress/zlib: not in enabled drivers build config 00:01:58.022 regex/*: missing internal dependency, "regexdev" 00:01:58.022 ml/*: missing internal dependency, "mldev" 00:01:58.022 vdpa/ifc: not in enabled drivers build config 00:01:58.022 vdpa/mlx5: not in enabled drivers build config 00:01:58.022 vdpa/nfp: not in enabled drivers build config 00:01:58.022 vdpa/sfc: not in enabled drivers build config 00:01:58.022 event/*: missing internal dependency, "eventdev" 00:01:58.022 baseband/*: missing internal dependency, "bbdev" 00:01:58.022 gpu/*: missing internal dependency, "gpudev" 00:01:58.022 00:01:58.022 00:01:58.022 Build targets in project: 85 00:01:58.022 00:01:58.022 DPDK 23.11.0 00:01:58.022 00:01:58.022 User defined options 00:01:58.022 buildtype : debug 00:01:58.022 default_library : shared 00:01:58.022 libdir : lib 00:01:58.022 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:58.022 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:58.022 c_link_args : 00:01:58.022 cpu_instruction_set: native 00:01:58.022 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:58.022 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:58.022 enable_docs : false 00:01:58.023 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:58.023 enable_kmods : false 00:01:58.023 tests : false 00:01:58.023 00:01:58.023 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.023 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:58.023 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.023 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.023 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.023 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.023 [5/265] Linking static target lib/librte_kvargs.a 00:01:58.023 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.023 [7/265] Linking static target lib/librte_log.a 00:01:58.023 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.023 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.023 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.023 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.023 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.023 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.281 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.281 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.281 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.281 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.281 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.281 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.539 [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.539 [21/265] Linking static target lib/librte_telemetry.a 00:01:58.539 [22/265] Linking target lib/librte_log.so.24.0 00:01:58.539 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.798 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:58.798 [25/265] Linking target lib/librte_kvargs.so.24.0 00:01:58.798 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.798 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.798 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:59.056 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.056 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.314 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.314 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.314 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.314 [34/265] Linking target lib/librte_telemetry.so.24.0 00:01:59.314 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:59.573 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.573 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.573 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.573 [39/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:59.573 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.573 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.831 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.831 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.831 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.831 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.089 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.089 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.089 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.347 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.347 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.606 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.606 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.606 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.606 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.863 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.864 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.864 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.864 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.121 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:01.122 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.122 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.122 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:01.122 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.380 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:01.380 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:01.638 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:01.638 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.638 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.897 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.897 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.897 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.155 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.155 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.155 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.155 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.155 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.155 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.155 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.414 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.414 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.673 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.931 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.931 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.931 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.931 [85/265] Linking static target lib/librte_eal.a 00:02:03.189 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.189 [87/265] Linking static target lib/librte_ring.a 00:02:03.189 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.189 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.448 [90/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.448 [91/265] Linking static target lib/librte_rcu.a 00:02:03.448 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.448 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.706 [94/265] Linking static target lib/librte_mempool.a 00:02:03.706 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.706 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.706 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.706 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.964 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.964 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.964 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.965 [102/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.965 [103/265] Linking static target lib/librte_mbuf.a 00:02:04.223 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.223 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.482 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.482 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.482 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.482 [109/265] Linking static target lib/librte_meter.a 00:02:04.482 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.741 [111/265] Linking static target lib/librte_net.a 00:02:04.741 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.741 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.999 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.999 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.999 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.999 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.999 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.999 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.565 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.565 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:05.565 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.823 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.823 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.081 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.081 [126/265] Linking static target lib/librte_pci.a 00:02:06.081 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.081 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.081 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.081 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.081 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.339 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.339 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.339 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.339 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.339 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.339 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.339 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:06.339 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.339 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.339 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.339 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.598 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.856 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:06.856 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.856 [146/265] Linking static target lib/librte_cmdline.a 00:02:06.856 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.115 [148/265] Linking static target lib/librte_ethdev.a 00:02:07.115 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.115 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.373 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.373 [152/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.373 [153/265] Linking static target lib/librte_timer.a 00:02:07.373 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.631 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.631 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.631 [157/265] Linking static target lib/librte_hash.a 00:02:07.890 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.890 [159/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.890 [160/265] Linking static target lib/librte_compressdev.a 00:02:07.890 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.148 [162/265] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:08.148 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.148 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.148 [165/265] Linking static target lib/librte_dmadev.a 00:02:08.406 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.406 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.664 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.664 [169/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.664 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.664 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.664 [172/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.664 [173/265] Linking static target lib/librte_cryptodev.a 00:02:08.923 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.923 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.923 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.192 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.192 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.192 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.192 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:09.450 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.450 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.450 [183/265] Linking static target lib/librte_power.a 00:02:09.709 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.709 [185/265] Linking static target lib/librte_reorder.a 00:02:09.709 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.709 [187/265] Linking static target lib/librte_security.a 00:02:09.709 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.967 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.967 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.967 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.226 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.226 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.484 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.484 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:10.741 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.741 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.741 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.998 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.998 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.998 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:10.998 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.256 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.256 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.514 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.514 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.514 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.514 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.514 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:11.772 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.772 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.772 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.772 [213/265] Linking static target drivers/librte_bus_vdev.a 00:02:11.772 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.772 [215/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.772 [216/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:11.772 [217/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.772 [218/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.772 [219/265] Linking static target drivers/librte_bus_pci.a 00:02:12.030 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.030 [221/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.030 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.030 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.030 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:12.289 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.920 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.920 [227/265] Linking static target lib/librte_vhost.a 00:02:13.854 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.854 [229/265] Linking target lib/librte_eal.so.24.0 00:02:13.854 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:13.854 [231/265] Linking target lib/librte_ring.so.24.0 00:02:13.854 [232/265] Linking target lib/librte_timer.so.24.0 00:02:13.854 [233/265] Linking target lib/librte_meter.so.24.0 00:02:13.854 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:13.854 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:13.855 [236/265] Linking target lib/librte_pci.so.24.0 00:02:14.112 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:14.112 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:14.112 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:14.112 [240/265] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:14.112 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:14.112 [242/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.112 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:14.112 [244/265] Linking target lib/librte_mempool.so.24.0 00:02:14.112 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:14.369 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:14.369 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:14.369 [248/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.369 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:14.369 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:14.369 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:14.628 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:14.628 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:14.628 [254/265] Linking target lib/librte_net.so.24.0 00:02:14.628 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:14.628 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:14.629 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:14.629 [258/265] Linking target lib/librte_security.so.24.0 00:02:14.629 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:14.629 [260/265] Linking target lib/librte_hash.so.24.0 00:02:14.629 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:14.888 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:14.888 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:14.888 [264/265] Linking target lib/librte_power.so.24.0 00:02:14.888 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:14.888 INFO: autodetecting backend as ninja 00:02:14.888 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:16.264 CC lib/ut/ut.o 00:02:16.264 CC lib/ut_mock/mock.o 00:02:16.264 CC lib/log/log_flags.o 00:02:16.264 CC lib/log/log.o 00:02:16.264 CC lib/log/log_deprecated.o 00:02:16.264 LIB libspdk_ut_mock.a 00:02:16.264 LIB libspdk_ut.a 00:02:16.264 LIB libspdk_log.a 00:02:16.264 SO libspdk_ut_mock.so.5.0 00:02:16.264 SO libspdk_ut.so.1.0 00:02:16.264 SO libspdk_log.so.6.1 00:02:16.264 SYMLINK libspdk_ut_mock.so 00:02:16.264 SYMLINK libspdk_ut.so 00:02:16.264 SYMLINK libspdk_log.so 00:02:16.523 CC lib/ioat/ioat.o 00:02:16.523 CC lib/util/base64.o 00:02:16.523 CC lib/util/bit_array.o 00:02:16.523 CXX lib/trace_parser/trace.o 00:02:16.523 CC lib/util/cpuset.o 00:02:16.523 CC lib/util/crc16.o 00:02:16.523 CC lib/util/crc32c.o 00:02:16.523 CC lib/util/crc32.o 00:02:16.523 CC lib/dma/dma.o 00:02:16.523 CC lib/vfio_user/host/vfio_user_pci.o 00:02:16.781 CC lib/util/crc32_ieee.o 00:02:16.781 CC lib/util/crc64.o 00:02:16.781 LIB libspdk_dma.a 00:02:16.781 CC lib/util/dif.o 00:02:16.781 CC lib/vfio_user/host/vfio_user.o 00:02:16.781 SO libspdk_dma.so.3.0 00:02:16.781 CC lib/util/fd.o 00:02:16.781 CC lib/util/file.o 00:02:16.781 CC lib/util/hexlify.o 00:02:16.781 SYMLINK libspdk_dma.so 00:02:16.781 CC lib/util/iov.o 00:02:16.781 CC lib/util/math.o 00:02:16.781 LIB libspdk_ioat.a 00:02:16.781 SO 
libspdk_ioat.so.6.0 00:02:16.781 CC lib/util/pipe.o 00:02:16.781 CC lib/util/strerror_tls.o 00:02:17.039 SYMLINK libspdk_ioat.so 00:02:17.039 CC lib/util/string.o 00:02:17.039 CC lib/util/uuid.o 00:02:17.039 CC lib/util/fd_group.o 00:02:17.039 CC lib/util/xor.o 00:02:17.039 LIB libspdk_vfio_user.a 00:02:17.039 CC lib/util/zipf.o 00:02:17.039 SO libspdk_vfio_user.so.4.0 00:02:17.039 SYMLINK libspdk_vfio_user.so 00:02:17.297 LIB libspdk_util.a 00:02:17.297 SO libspdk_util.so.8.0 00:02:17.555 SYMLINK libspdk_util.so 00:02:17.555 LIB libspdk_trace_parser.a 00:02:17.555 SO libspdk_trace_parser.so.4.0 00:02:17.555 CC lib/env_dpdk/env.o 00:02:17.555 CC lib/env_dpdk/memory.o 00:02:17.555 CC lib/env_dpdk/pci.o 00:02:17.555 CC lib/env_dpdk/init.o 00:02:17.555 CC lib/rdma/common.o 00:02:17.555 CC lib/idxd/idxd.o 00:02:17.555 CC lib/conf/conf.o 00:02:17.555 CC lib/vmd/vmd.o 00:02:17.555 CC lib/json/json_parse.o 00:02:17.814 SYMLINK libspdk_trace_parser.so 00:02:17.814 CC lib/env_dpdk/threads.o 00:02:17.814 CC lib/vmd/led.o 00:02:17.814 CC lib/rdma/rdma_verbs.o 00:02:18.072 LIB libspdk_conf.a 00:02:18.072 CC lib/json/json_util.o 00:02:18.072 SO libspdk_conf.so.5.0 00:02:18.072 CC lib/env_dpdk/pci_ioat.o 00:02:18.072 CC lib/json/json_write.o 00:02:18.072 CC lib/env_dpdk/pci_virtio.o 00:02:18.072 SYMLINK libspdk_conf.so 00:02:18.072 CC lib/env_dpdk/pci_vmd.o 00:02:18.072 LIB libspdk_rdma.a 00:02:18.072 CC lib/env_dpdk/pci_idxd.o 00:02:18.072 SO libspdk_rdma.so.5.0 00:02:18.072 CC lib/idxd/idxd_user.o 00:02:18.072 CC lib/idxd/idxd_kernel.o 00:02:18.072 CC lib/env_dpdk/pci_event.o 00:02:18.330 CC lib/env_dpdk/sigbus_handler.o 00:02:18.330 SYMLINK libspdk_rdma.so 00:02:18.330 CC lib/env_dpdk/pci_dpdk.o 00:02:18.330 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.330 LIB libspdk_vmd.a 00:02:18.330 LIB libspdk_json.a 00:02:18.330 SO libspdk_vmd.so.5.0 00:02:18.330 SO libspdk_json.so.5.1 00:02:18.330 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.330 SYMLINK libspdk_vmd.so 00:02:18.330 SYMLINK libspdk_json.so 00:02:18.330 LIB libspdk_idxd.a 00:02:18.589 SO libspdk_idxd.so.11.0 00:02:18.589 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.589 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.589 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:18.589 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.589 SYMLINK libspdk_idxd.so 00:02:18.847 LIB libspdk_jsonrpc.a 00:02:18.847 SO libspdk_jsonrpc.so.5.1 00:02:18.847 SYMLINK libspdk_jsonrpc.so 00:02:19.105 LIB libspdk_env_dpdk.a 00:02:19.105 CC lib/rpc/rpc.o 00:02:19.105 SO libspdk_env_dpdk.so.13.0 00:02:19.363 SYMLINK libspdk_env_dpdk.so 00:02:19.363 LIB libspdk_rpc.a 00:02:19.363 SO libspdk_rpc.so.5.0 00:02:19.363 SYMLINK libspdk_rpc.so 00:02:19.621 CC lib/sock/sock.o 00:02:19.621 CC lib/sock/sock_rpc.o 00:02:19.621 CC lib/trace/trace.o 00:02:19.621 CC lib/trace/trace_rpc.o 00:02:19.621 CC lib/trace/trace_flags.o 00:02:19.621 CC lib/notify/notify.o 00:02:19.621 CC lib/notify/notify_rpc.o 00:02:19.621 LIB libspdk_notify.a 00:02:19.880 SO libspdk_notify.so.5.0 00:02:19.880 LIB libspdk_trace.a 00:02:19.880 SYMLINK libspdk_notify.so 00:02:19.880 SO libspdk_trace.so.9.0 00:02:19.880 SYMLINK libspdk_trace.so 00:02:19.880 LIB libspdk_sock.a 00:02:20.138 SO libspdk_sock.so.8.0 00:02:20.138 CC lib/thread/thread.o 00:02:20.138 CC lib/thread/iobuf.o 00:02:20.138 SYMLINK libspdk_sock.so 00:02:20.396 CC lib/nvme/nvme_ctrlr.o 00:02:20.396 CC lib/nvme/nvme_fabric.o 00:02:20.396 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:20.396 CC lib/nvme/nvme_ns_cmd.o 00:02:20.396 CC lib/nvme/nvme_ns.o 00:02:20.396 CC 
lib/nvme/nvme_pcie_common.o 00:02:20.396 CC lib/nvme/nvme_pcie.o 00:02:20.396 CC lib/nvme/nvme_qpair.o 00:02:20.396 CC lib/nvme/nvme.o 00:02:20.962 CC lib/nvme/nvme_quirks.o 00:02:20.962 CC lib/nvme/nvme_transport.o 00:02:21.220 CC lib/nvme/nvme_discovery.o 00:02:21.220 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.220 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.478 CC lib/nvme/nvme_tcp.o 00:02:21.478 CC lib/nvme/nvme_opal.o 00:02:21.478 CC lib/nvme/nvme_io_msg.o 00:02:21.738 CC lib/nvme/nvme_poll_group.o 00:02:21.738 LIB libspdk_thread.a 00:02:21.738 CC lib/nvme/nvme_zns.o 00:02:21.738 SO libspdk_thread.so.9.0 00:02:21.738 CC lib/nvme/nvme_cuse.o 00:02:21.738 CC lib/nvme/nvme_vfio_user.o 00:02:21.738 SYMLINK libspdk_thread.so 00:02:21.738 CC lib/nvme/nvme_rdma.o 00:02:21.996 CC lib/accel/accel.o 00:02:21.997 CC lib/blob/blobstore.o 00:02:21.997 CC lib/blob/request.o 00:02:22.255 CC lib/blob/zeroes.o 00:02:22.513 CC lib/blob/blob_bs_dev.o 00:02:22.513 CC lib/init/json_config.o 00:02:22.513 CC lib/init/subsystem.o 00:02:22.513 CC lib/accel/accel_rpc.o 00:02:22.513 CC lib/virtio/virtio.o 00:02:22.513 CC lib/accel/accel_sw.o 00:02:22.772 CC lib/init/subsystem_rpc.o 00:02:22.772 CC lib/init/rpc.o 00:02:22.772 CC lib/virtio/virtio_vhost_user.o 00:02:22.772 CC lib/virtio/virtio_vfio_user.o 00:02:22.772 CC lib/virtio/virtio_pci.o 00:02:22.772 LIB libspdk_init.a 00:02:23.031 SO libspdk_init.so.4.0 00:02:23.031 SYMLINK libspdk_init.so 00:02:23.031 CC lib/vfu_tgt/tgt_endpoint.o 00:02:23.031 CC lib/vfu_tgt/tgt_rpc.o 00:02:23.031 LIB libspdk_accel.a 00:02:23.031 SO libspdk_accel.so.14.0 00:02:23.031 CC lib/event/reactor.o 00:02:23.031 CC lib/event/app.o 00:02:23.031 CC lib/event/log_rpc.o 00:02:23.031 CC lib/event/app_rpc.o 00:02:23.031 LIB libspdk_virtio.a 00:02:23.031 SYMLINK libspdk_accel.so 00:02:23.290 CC lib/event/scheduler_static.o 00:02:23.290 SO libspdk_virtio.so.6.0 00:02:23.290 CC lib/bdev/bdev.o 00:02:23.290 SYMLINK libspdk_virtio.so 00:02:23.290 CC lib/bdev/bdev_rpc.o 00:02:23.290 CC lib/bdev/bdev_zone.o 00:02:23.290 LIB libspdk_nvme.a 00:02:23.290 CC lib/bdev/part.o 00:02:23.290 LIB libspdk_vfu_tgt.a 00:02:23.290 CC lib/bdev/scsi_nvme.o 00:02:23.290 SO libspdk_vfu_tgt.so.2.0 00:02:23.549 SYMLINK libspdk_vfu_tgt.so 00:02:23.549 SO libspdk_nvme.so.12.0 00:02:23.549 LIB libspdk_event.a 00:02:23.808 SO libspdk_event.so.12.0 00:02:23.808 SYMLINK libspdk_event.so 00:02:23.808 SYMLINK libspdk_nvme.so 00:02:24.744 LIB libspdk_blob.a 00:02:25.003 SO libspdk_blob.so.10.1 00:02:25.003 SYMLINK libspdk_blob.so 00:02:25.262 CC lib/blobfs/blobfs.o 00:02:25.262 CC lib/blobfs/tree.o 00:02:25.262 CC lib/lvol/lvol.o 00:02:25.835 LIB libspdk_bdev.a 00:02:25.835 SO libspdk_bdev.so.14.0 00:02:26.101 LIB libspdk_blobfs.a 00:02:26.102 LIB libspdk_lvol.a 00:02:26.102 SYMLINK libspdk_bdev.so 00:02:26.102 SO libspdk_lvol.so.9.1 00:02:26.102 SO libspdk_blobfs.so.9.0 00:02:26.102 SYMLINK libspdk_lvol.so 00:02:26.102 SYMLINK libspdk_blobfs.so 00:02:26.102 CC lib/scsi/dev.o 00:02:26.102 CC lib/scsi/lun.o 00:02:26.102 CC lib/scsi/port.o 00:02:26.102 CC lib/scsi/scsi_bdev.o 00:02:26.102 CC lib/scsi/scsi_pr.o 00:02:26.102 CC lib/scsi/scsi.o 00:02:26.102 CC lib/nvmf/ctrlr.o 00:02:26.102 CC lib/nbd/nbd.o 00:02:26.102 CC lib/ublk/ublk.o 00:02:26.102 CC lib/ftl/ftl_core.o 00:02:26.361 CC lib/scsi/scsi_rpc.o 00:02:26.361 CC lib/nvmf/ctrlr_discovery.o 00:02:26.361 CC lib/nvmf/ctrlr_bdev.o 00:02:26.620 CC lib/ublk/ublk_rpc.o 00:02:26.620 CC lib/nbd/nbd_rpc.o 00:02:26.620 CC lib/scsi/task.o 00:02:26.620 CC lib/ftl/ftl_init.o 
00:02:26.620 CC lib/ftl/ftl_layout.o 00:02:26.620 CC lib/nvmf/subsystem.o 00:02:26.620 LIB libspdk_nbd.a 00:02:26.620 CC lib/nvmf/nvmf.o 00:02:26.620 SO libspdk_nbd.so.6.0 00:02:26.879 LIB libspdk_scsi.a 00:02:26.879 CC lib/ftl/ftl_debug.o 00:02:26.879 SYMLINK libspdk_nbd.so 00:02:26.879 CC lib/ftl/ftl_io.o 00:02:26.879 SO libspdk_scsi.so.8.0 00:02:26.879 LIB libspdk_ublk.a 00:02:26.879 CC lib/nvmf/nvmf_rpc.o 00:02:26.879 SO libspdk_ublk.so.2.0 00:02:26.879 SYMLINK libspdk_scsi.so 00:02:26.879 CC lib/ftl/ftl_sb.o 00:02:26.879 SYMLINK libspdk_ublk.so 00:02:26.879 CC lib/ftl/ftl_l2p.o 00:02:27.138 CC lib/ftl/ftl_l2p_flat.o 00:02:27.138 CC lib/iscsi/conn.o 00:02:27.138 CC lib/iscsi/init_grp.o 00:02:27.138 CC lib/nvmf/transport.o 00:02:27.138 CC lib/ftl/ftl_nv_cache.o 00:02:27.138 CC lib/iscsi/iscsi.o 00:02:27.138 CC lib/iscsi/md5.o 00:02:27.397 CC lib/nvmf/tcp.o 00:02:27.397 CC lib/nvmf/vfio_user.o 00:02:27.655 CC lib/nvmf/rdma.o 00:02:27.655 CC lib/iscsi/param.o 00:02:27.655 CC lib/vhost/vhost.o 00:02:27.655 CC lib/ftl/ftl_band.o 00:02:27.914 CC lib/ftl/ftl_band_ops.o 00:02:27.914 CC lib/ftl/ftl_writer.o 00:02:27.914 CC lib/ftl/ftl_rq.o 00:02:28.173 CC lib/vhost/vhost_rpc.o 00:02:28.173 CC lib/ftl/ftl_reloc.o 00:02:28.173 CC lib/vhost/vhost_scsi.o 00:02:28.173 CC lib/ftl/ftl_l2p_cache.o 00:02:28.173 CC lib/ftl/ftl_p2l.o 00:02:28.431 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.431 CC lib/iscsi/portal_grp.o 00:02:28.689 CC lib/iscsi/tgt_node.o 00:02:28.689 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.690 CC lib/vhost/vhost_blk.o 00:02:28.690 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.948 CC lib/vhost/rte_vhost_user.o 00:02:28.948 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.948 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.948 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.948 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.948 CC lib/iscsi/iscsi_subsystem.o 00:02:28.948 CC lib/iscsi/iscsi_rpc.o 00:02:29.207 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.207 CC lib/iscsi/task.o 00:02:29.207 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.207 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.207 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.207 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.465 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:29.466 CC lib/ftl/utils/ftl_conf.o 00:02:29.466 CC lib/ftl/utils/ftl_md.o 00:02:29.466 CC lib/ftl/utils/ftl_mempool.o 00:02:29.466 LIB libspdk_iscsi.a 00:02:29.466 CC lib/ftl/utils/ftl_bitmap.o 00:02:29.466 SO libspdk_iscsi.so.7.0 00:02:29.724 CC lib/ftl/utils/ftl_property.o 00:02:29.724 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:29.724 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:29.724 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:29.724 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:29.724 SYMLINK libspdk_iscsi.so 00:02:29.724 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:29.724 LIB libspdk_nvmf.a 00:02:29.724 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:29.982 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:29.982 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:29.982 SO libspdk_nvmf.so.17.0 00:02:29.982 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:29.982 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:29.982 CC lib/ftl/base/ftl_base_dev.o 00:02:29.982 CC lib/ftl/base/ftl_base_bdev.o 00:02:29.982 CC lib/ftl/ftl_trace.o 00:02:29.982 LIB libspdk_vhost.a 00:02:29.982 SO libspdk_vhost.so.7.1 00:02:29.982 SYMLINK libspdk_nvmf.so 00:02:30.241 SYMLINK libspdk_vhost.so 00:02:30.241 LIB libspdk_ftl.a 00:02:30.499 SO libspdk_ftl.so.8.0 00:02:30.762 SYMLINK libspdk_ftl.so 00:02:31.020 CC module/vfu_device/vfu_virtio.o 00:02:31.020 CC module/env_dpdk/env_dpdk_rpc.o 
00:02:31.020 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.020 CC module/sock/posix/posix.o 00:02:31.020 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.020 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.020 CC module/accel/dsa/accel_dsa.o 00:02:31.020 CC module/blob/bdev/blob_bdev.o 00:02:31.020 CC module/accel/error/accel_error.o 00:02:31.020 CC module/accel/ioat/accel_ioat.o 00:02:31.020 LIB libspdk_env_dpdk_rpc.a 00:02:31.020 SO libspdk_env_dpdk_rpc.so.5.0 00:02:31.279 LIB libspdk_scheduler_gscheduler.a 00:02:31.279 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.279 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.279 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.279 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:31.279 SO libspdk_scheduler_gscheduler.so.3.0 00:02:31.279 LIB libspdk_scheduler_dynamic.a 00:02:31.279 CC module/accel/error/accel_error_rpc.o 00:02:31.279 SO libspdk_scheduler_dynamic.so.3.0 00:02:31.279 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.279 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.279 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.279 CC module/vfu_device/vfu_virtio_blk.o 00:02:31.279 CC module/vfu_device/vfu_virtio_scsi.o 00:02:31.279 LIB libspdk_blob_bdev.a 00:02:31.279 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.279 CC module/vfu_device/vfu_virtio_rpc.o 00:02:31.279 SO libspdk_blob_bdev.so.10.1 00:02:31.279 LIB libspdk_accel_ioat.a 00:02:31.279 CC module/accel/iaa/accel_iaa.o 00:02:31.279 SO libspdk_accel_ioat.so.5.0 00:02:31.279 LIB libspdk_accel_error.a 00:02:31.279 SYMLINK libspdk_blob_bdev.so 00:02:31.279 LIB libspdk_accel_dsa.a 00:02:31.537 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.537 SO libspdk_accel_error.so.1.0 00:02:31.537 SYMLINK libspdk_accel_ioat.so 00:02:31.537 SO libspdk_accel_dsa.so.4.0 00:02:31.537 SYMLINK libspdk_accel_error.so 00:02:31.537 SYMLINK libspdk_accel_dsa.so 00:02:31.537 LIB libspdk_accel_iaa.a 00:02:31.537 SO libspdk_accel_iaa.so.2.0 00:02:31.796 CC module/sock/uring/uring.o 00:02:31.796 CC module/bdev/delay/vbdev_delay.o 00:02:31.796 LIB libspdk_vfu_device.a 00:02:31.796 CC module/bdev/error/vbdev_error.o 00:02:31.796 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.796 CC module/bdev/gpt/gpt.o 00:02:31.796 SYMLINK libspdk_accel_iaa.so 00:02:31.796 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.796 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.796 CC module/bdev/malloc/bdev_malloc.o 00:02:31.796 LIB libspdk_sock_posix.a 00:02:31.796 SO libspdk_vfu_device.so.2.0 00:02:31.796 SO libspdk_sock_posix.so.5.0 00:02:31.796 SYMLINK libspdk_vfu_device.so 00:02:31.796 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.796 SYMLINK libspdk_sock_posix.so 00:02:32.055 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:32.055 CC module/bdev/error/vbdev_error_rpc.o 00:02:32.055 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.055 CC module/bdev/null/bdev_null.o 00:02:32.055 LIB libspdk_bdev_gpt.a 00:02:32.055 SO libspdk_bdev_gpt.so.5.0 00:02:32.055 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:32.055 LIB libspdk_bdev_error.a 00:02:32.055 LIB libspdk_blobfs_bdev.a 00:02:32.055 SO libspdk_bdev_error.so.5.0 00:02:32.055 SO libspdk_blobfs_bdev.so.5.0 00:02:32.055 SYMLINK libspdk_bdev_gpt.so 00:02:32.055 CC module/bdev/null/bdev_null_rpc.o 00:02:32.055 LIB libspdk_bdev_malloc.a 00:02:32.314 SYMLINK libspdk_bdev_error.so 00:02:32.314 SYMLINK libspdk_blobfs_bdev.so 00:02:32.314 SO libspdk_bdev_malloc.so.5.0 00:02:32.314 LIB libspdk_bdev_lvol.a 00:02:32.314 LIB libspdk_bdev_delay.a 00:02:32.314 SYMLINK libspdk_bdev_malloc.so 00:02:32.314 SO 
libspdk_bdev_lvol.so.5.0 00:02:32.314 SO libspdk_bdev_delay.so.5.0 00:02:32.314 CC module/bdev/nvme/bdev_nvme.o 00:02:32.314 CC module/bdev/passthru/vbdev_passthru.o 00:02:32.314 CC module/bdev/raid/bdev_raid.o 00:02:32.314 CC module/bdev/split/vbdev_split.o 00:02:32.314 LIB libspdk_bdev_null.a 00:02:32.314 SYMLINK libspdk_bdev_lvol.so 00:02:32.314 SO libspdk_bdev_null.so.5.0 00:02:32.314 SYMLINK libspdk_bdev_delay.so 00:02:32.314 CC module/bdev/raid/bdev_raid_rpc.o 00:02:32.314 LIB libspdk_sock_uring.a 00:02:32.314 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:32.314 CC module/bdev/uring/bdev_uring.o 00:02:32.314 SO libspdk_sock_uring.so.4.0 00:02:32.573 SYMLINK libspdk_bdev_null.so 00:02:32.573 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:32.573 CC module/bdev/aio/bdev_aio.o 00:02:32.573 SYMLINK libspdk_sock_uring.so 00:02:32.573 CC module/bdev/split/vbdev_split_rpc.o 00:02:32.573 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:32.573 CC module/bdev/nvme/nvme_rpc.o 00:02:32.573 CC module/bdev/nvme/bdev_mdns_client.o 00:02:32.573 LIB libspdk_bdev_split.a 00:02:32.831 SO libspdk_bdev_split.so.5.0 00:02:32.831 LIB libspdk_bdev_passthru.a 00:02:32.831 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:32.831 SO libspdk_bdev_passthru.so.5.0 00:02:32.831 SYMLINK libspdk_bdev_split.so 00:02:32.831 CC module/bdev/nvme/vbdev_opal.o 00:02:32.831 CC module/bdev/uring/bdev_uring_rpc.o 00:02:32.831 CC module/bdev/aio/bdev_aio_rpc.o 00:02:32.831 SYMLINK libspdk_bdev_passthru.so 00:02:32.831 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:32.831 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:32.831 CC module/bdev/raid/bdev_raid_sb.o 00:02:32.831 LIB libspdk_bdev_zone_block.a 00:02:33.089 SO libspdk_bdev_zone_block.so.5.0 00:02:33.089 LIB libspdk_bdev_uring.a 00:02:33.089 SO libspdk_bdev_uring.so.5.0 00:02:33.089 SYMLINK libspdk_bdev_zone_block.so 00:02:33.089 LIB libspdk_bdev_aio.a 00:02:33.089 SYMLINK libspdk_bdev_uring.so 00:02:33.089 CC module/bdev/raid/raid0.o 00:02:33.089 CC module/bdev/raid/raid1.o 00:02:33.089 SO libspdk_bdev_aio.so.5.0 00:02:33.089 CC module/bdev/raid/concat.o 00:02:33.089 CC module/bdev/ftl/bdev_ftl.o 00:02:33.089 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.089 SYMLINK libspdk_bdev_aio.so 00:02:33.347 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.347 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.347 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.347 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.347 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.347 LIB libspdk_bdev_raid.a 00:02:33.605 SO libspdk_bdev_raid.so.5.0 00:02:33.605 LIB libspdk_bdev_ftl.a 00:02:33.605 SO libspdk_bdev_ftl.so.5.0 00:02:33.606 SYMLINK libspdk_bdev_raid.so 00:02:33.606 SYMLINK libspdk_bdev_ftl.so 00:02:33.606 LIB libspdk_bdev_iscsi.a 00:02:33.606 SO libspdk_bdev_iscsi.so.5.0 00:02:33.864 SYMLINK libspdk_bdev_iscsi.so 00:02:33.864 LIB libspdk_bdev_virtio.a 00:02:33.864 SO libspdk_bdev_virtio.so.5.0 00:02:33.864 SYMLINK libspdk_bdev_virtio.so 00:02:34.800 LIB libspdk_bdev_nvme.a 00:02:34.800 SO libspdk_bdev_nvme.so.6.0 00:02:34.800 SYMLINK libspdk_bdev_nvme.so 00:02:35.058 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:35.058 CC module/event/subsystems/vmd/vmd.o 00:02:35.059 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:35.059 CC module/event/subsystems/iobuf/iobuf.o 00:02:35.059 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:35.059 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:35.059 CC module/event/subsystems/scheduler/scheduler.o 00:02:35.059 CC 
module/event/subsystems/sock/sock.o 00:02:35.317 LIB libspdk_event_scheduler.a 00:02:35.317 LIB libspdk_event_sock.a 00:02:35.317 LIB libspdk_event_vhost_blk.a 00:02:35.317 LIB libspdk_event_iobuf.a 00:02:35.317 LIB libspdk_event_vmd.a 00:02:35.317 SO libspdk_event_sock.so.4.0 00:02:35.317 SO libspdk_event_scheduler.so.3.0 00:02:35.317 SO libspdk_event_vhost_blk.so.2.0 00:02:35.317 SO libspdk_event_iobuf.so.2.0 00:02:35.317 LIB libspdk_event_vfu_tgt.a 00:02:35.317 SO libspdk_event_vmd.so.5.0 00:02:35.317 SO libspdk_event_vfu_tgt.so.2.0 00:02:35.317 SYMLINK libspdk_event_vhost_blk.so 00:02:35.317 SYMLINK libspdk_event_sock.so 00:02:35.317 SYMLINK libspdk_event_scheduler.so 00:02:35.317 SYMLINK libspdk_event_iobuf.so 00:02:35.317 SYMLINK libspdk_event_vmd.so 00:02:35.317 SYMLINK libspdk_event_vfu_tgt.so 00:02:35.575 CC module/event/subsystems/accel/accel.o 00:02:35.834 LIB libspdk_event_accel.a 00:02:35.834 SO libspdk_event_accel.so.5.0 00:02:35.834 SYMLINK libspdk_event_accel.so 00:02:36.093 CC module/event/subsystems/bdev/bdev.o 00:02:36.093 LIB libspdk_event_bdev.a 00:02:36.093 SO libspdk_event_bdev.so.5.0 00:02:36.351 SYMLINK libspdk_event_bdev.so 00:02:36.351 CC module/event/subsystems/nbd/nbd.o 00:02:36.351 CC module/event/subsystems/ublk/ublk.o 00:02:36.351 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:36.351 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:36.351 CC module/event/subsystems/scsi/scsi.o 00:02:36.609 LIB libspdk_event_nbd.a 00:02:36.609 SO libspdk_event_nbd.so.5.0 00:02:36.609 LIB libspdk_event_ublk.a 00:02:36.609 SYMLINK libspdk_event_nbd.so 00:02:36.609 LIB libspdk_event_scsi.a 00:02:36.609 SO libspdk_event_ublk.so.2.0 00:02:36.609 SO libspdk_event_scsi.so.5.0 00:02:36.609 SYMLINK libspdk_event_ublk.so 00:02:36.868 LIB libspdk_event_nvmf.a 00:02:36.868 SYMLINK libspdk_event_scsi.so 00:02:36.868 SO libspdk_event_nvmf.so.5.0 00:02:36.868 SYMLINK libspdk_event_nvmf.so 00:02:36.868 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.868 CC module/event/subsystems/iscsi/iscsi.o 00:02:37.128 LIB libspdk_event_vhost_scsi.a 00:02:37.128 LIB libspdk_event_iscsi.a 00:02:37.128 SO libspdk_event_vhost_scsi.so.2.0 00:02:37.128 SO libspdk_event_iscsi.so.5.0 00:02:37.128 SYMLINK libspdk_event_vhost_scsi.so 00:02:37.128 SYMLINK libspdk_event_iscsi.so 00:02:37.387 SO libspdk.so.5.0 00:02:37.387 SYMLINK libspdk.so 00:02:37.387 CC app/spdk_nvme_perf/perf.o 00:02:37.387 CC app/spdk_lspci/spdk_lspci.o 00:02:37.387 CC app/trace_record/trace_record.o 00:02:37.387 CC app/spdk_nvme_identify/identify.o 00:02:37.387 CXX app/trace/trace.o 00:02:37.645 CC app/nvmf_tgt/nvmf_main.o 00:02:37.645 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.645 CC examples/accel/perf/accel_perf.o 00:02:37.645 CC app/spdk_tgt/spdk_tgt.o 00:02:37.645 CC test/accel/dif/dif.o 00:02:37.645 LINK spdk_lspci 00:02:37.645 LINK spdk_trace_record 00:02:37.904 LINK nvmf_tgt 00:02:37.904 LINK iscsi_tgt 00:02:37.904 LINK spdk_tgt 00:02:37.904 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.904 LINK spdk_trace 00:02:37.904 CC app/spdk_top/spdk_top.o 00:02:37.904 LINK dif 00:02:38.162 LINK accel_perf 00:02:38.162 CC app/vhost/vhost.o 00:02:38.162 LINK spdk_nvme_discover 00:02:38.162 CC app/spdk_dd/spdk_dd.o 00:02:38.162 CC app/fio/nvme/fio_plugin.o 00:02:38.421 CC examples/bdev/hello_world/hello_bdev.o 00:02:38.421 LINK vhost 00:02:38.421 LINK spdk_nvme_identify 00:02:38.421 LINK spdk_nvme_perf 00:02:38.421 CC test/app/histogram_perf/histogram_perf.o 00:02:38.421 CC test/app/bdev_svc/bdev_svc.o 00:02:38.421 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:38.697 LINK hello_bdev 00:02:38.697 LINK histogram_perf 00:02:38.697 LINK spdk_dd 00:02:38.697 CC app/fio/bdev/fio_plugin.o 00:02:38.697 LINK bdev_svc 00:02:38.697 CC test/bdev/bdevio/bdevio.o 00:02:38.965 CC test/blobfs/mkfs/mkfs.o 00:02:38.965 LINK nvme_fuzz 00:02:38.965 LINK spdk_nvme 00:02:38.965 LINK spdk_top 00:02:38.965 TEST_HEADER include/spdk/accel.h 00:02:38.965 TEST_HEADER include/spdk/accel_module.h 00:02:38.965 TEST_HEADER include/spdk/assert.h 00:02:38.965 TEST_HEADER include/spdk/barrier.h 00:02:38.965 TEST_HEADER include/spdk/base64.h 00:02:38.965 CC examples/bdev/bdevperf/bdevperf.o 00:02:38.965 TEST_HEADER include/spdk/bdev.h 00:02:38.965 TEST_HEADER include/spdk/bdev_module.h 00:02:38.965 TEST_HEADER include/spdk/bdev_zone.h 00:02:38.965 TEST_HEADER include/spdk/bit_array.h 00:02:38.965 TEST_HEADER include/spdk/bit_pool.h 00:02:38.965 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.965 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.965 TEST_HEADER include/spdk/blobfs.h 00:02:38.965 TEST_HEADER include/spdk/blob.h 00:02:38.965 TEST_HEADER include/spdk/conf.h 00:02:38.965 TEST_HEADER include/spdk/config.h 00:02:38.965 TEST_HEADER include/spdk/cpuset.h 00:02:38.965 TEST_HEADER include/spdk/crc16.h 00:02:38.965 TEST_HEADER include/spdk/crc32.h 00:02:38.965 TEST_HEADER include/spdk/crc64.h 00:02:38.965 TEST_HEADER include/spdk/dif.h 00:02:38.965 TEST_HEADER include/spdk/dma.h 00:02:38.965 TEST_HEADER include/spdk/endian.h 00:02:38.965 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.965 TEST_HEADER include/spdk/env.h 00:02:38.965 TEST_HEADER include/spdk/event.h 00:02:38.965 TEST_HEADER include/spdk/fd_group.h 00:02:38.965 LINK mkfs 00:02:38.965 TEST_HEADER include/spdk/fd.h 00:02:38.965 TEST_HEADER include/spdk/file.h 00:02:38.965 TEST_HEADER include/spdk/ftl.h 00:02:38.965 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.965 TEST_HEADER include/spdk/hexlify.h 00:02:38.965 TEST_HEADER include/spdk/histogram_data.h 00:02:38.965 TEST_HEADER include/spdk/idxd.h 00:02:38.965 TEST_HEADER include/spdk/idxd_spec.h 00:02:38.965 TEST_HEADER include/spdk/init.h 00:02:38.965 TEST_HEADER include/spdk/ioat.h 00:02:38.965 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.965 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.965 TEST_HEADER include/spdk/json.h 00:02:38.965 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.965 TEST_HEADER include/spdk/likely.h 00:02:38.965 TEST_HEADER include/spdk/log.h 00:02:38.965 TEST_HEADER include/spdk/lvol.h 00:02:38.965 TEST_HEADER include/spdk/memory.h 00:02:38.965 TEST_HEADER include/spdk/mmio.h 00:02:38.965 TEST_HEADER include/spdk/nbd.h 00:02:38.965 TEST_HEADER include/spdk/notify.h 00:02:38.965 TEST_HEADER include/spdk/nvme.h 00:02:38.965 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.965 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.223 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.223 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.223 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.223 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.223 CC test/event/event_perf/event_perf.o 00:02:39.223 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.223 TEST_HEADER include/spdk/nvmf.h 00:02:39.223 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.223 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.223 TEST_HEADER include/spdk/opal.h 00:02:39.223 TEST_HEADER include/spdk/opal_spec.h 00:02:39.223 TEST_HEADER include/spdk/pci_ids.h 00:02:39.223 TEST_HEADER include/spdk/pipe.h 00:02:39.223 TEST_HEADER include/spdk/queue.h 00:02:39.223 
TEST_HEADER include/spdk/reduce.h 00:02:39.223 CC test/dma/test_dma/test_dma.o 00:02:39.223 TEST_HEADER include/spdk/rpc.h 00:02:39.223 TEST_HEADER include/spdk/scheduler.h 00:02:39.223 TEST_HEADER include/spdk/scsi.h 00:02:39.223 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.223 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.223 TEST_HEADER include/spdk/sock.h 00:02:39.223 TEST_HEADER include/spdk/stdinc.h 00:02:39.223 TEST_HEADER include/spdk/string.h 00:02:39.223 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.223 TEST_HEADER include/spdk/thread.h 00:02:39.223 TEST_HEADER include/spdk/trace.h 00:02:39.223 TEST_HEADER include/spdk/trace_parser.h 00:02:39.223 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.223 TEST_HEADER include/spdk/tree.h 00:02:39.223 TEST_HEADER include/spdk/ublk.h 00:02:39.223 TEST_HEADER include/spdk/util.h 00:02:39.223 TEST_HEADER include/spdk/uuid.h 00:02:39.223 TEST_HEADER include/spdk/version.h 00:02:39.223 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.223 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.223 TEST_HEADER include/spdk/vhost.h 00:02:39.223 TEST_HEADER include/spdk/vmd.h 00:02:39.223 TEST_HEADER include/spdk/xor.h 00:02:39.223 TEST_HEADER include/spdk/zipf.h 00:02:39.223 CXX test/cpp_headers/accel.o 00:02:39.223 LINK bdevio 00:02:39.223 CXX test/cpp_headers/accel_module.o 00:02:39.223 LINK spdk_bdev 00:02:39.223 LINK event_perf 00:02:39.481 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.481 CC test/env/vtophys/vtophys.o 00:02:39.481 CXX test/cpp_headers/assert.o 00:02:39.481 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.481 CXX test/cpp_headers/barrier.o 00:02:39.481 LINK test_dma 00:02:39.481 CC test/event/reactor/reactor.o 00:02:39.481 LINK vtophys 00:02:39.739 LINK env_dpdk_post_init 00:02:39.739 CXX test/cpp_headers/base64.o 00:02:39.739 LINK reactor 00:02:39.739 LINK vhost_fuzz 00:02:39.739 LINK bdevperf 00:02:39.739 CC examples/blob/hello_world/hello_blob.o 00:02:39.739 LINK mem_callbacks 00:02:39.739 CXX test/cpp_headers/bdev.o 00:02:39.998 CC examples/ioat/perf/perf.o 00:02:39.998 CC examples/nvme/hello_world/hello_world.o 00:02:39.998 CC test/event/reactor_perf/reactor_perf.o 00:02:39.998 CC examples/sock/hello_world/hello_sock.o 00:02:39.998 LINK hello_blob 00:02:39.998 CXX test/cpp_headers/bdev_module.o 00:02:39.998 LINK reactor_perf 00:02:39.998 CC test/env/memory/memory_ut.o 00:02:39.998 CC examples/blob/cli/blobcli.o 00:02:40.257 CC test/lvol/esnap/esnap.o 00:02:40.257 LINK ioat_perf 00:02:40.257 LINK hello_world 00:02:40.257 LINK hello_sock 00:02:40.257 CXX test/cpp_headers/bdev_zone.o 00:02:40.257 CC test/event/app_repeat/app_repeat.o 00:02:40.257 CC test/event/scheduler/scheduler.o 00:02:40.516 CC examples/nvme/reconnect/reconnect.o 00:02:40.516 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:40.516 CC examples/ioat/verify/verify.o 00:02:40.516 CXX test/cpp_headers/bit_array.o 00:02:40.516 LINK app_repeat 00:02:40.516 LINK blobcli 00:02:40.516 LINK scheduler 00:02:40.516 CXX test/cpp_headers/bit_pool.o 00:02:40.775 LINK verify 00:02:40.775 LINK reconnect 00:02:40.775 CXX test/cpp_headers/blob_bdev.o 00:02:40.775 CC test/nvme/aer/aer.o 00:02:40.775 LINK iscsi_fuzz 00:02:41.034 CC test/rpc_client/rpc_client_test.o 00:02:41.034 CC test/app/jsoncat/jsoncat.o 00:02:41.034 CC test/app/stub/stub.o 00:02:41.034 LINK nvme_manage 00:02:41.034 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.034 LINK memory_ut 00:02:41.034 LINK jsoncat 00:02:41.034 CC examples/vmd/lsvmd/lsvmd.o 00:02:41.034 LINK stub 
00:02:41.034 LINK aer 00:02:41.034 LINK rpc_client_test 00:02:41.293 CC examples/nvme/arbitration/arbitration.o 00:02:41.293 CC examples/nvme/hotplug/hotplug.o 00:02:41.293 CXX test/cpp_headers/blobfs.o 00:02:41.293 LINK lsvmd 00:02:41.293 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.293 CC test/env/pci/pci_ut.o 00:02:41.293 CC test/nvme/reset/reset.o 00:02:41.293 CC examples/nvme/abort/abort.o 00:02:41.293 CC test/nvme/sgl/sgl.o 00:02:41.293 CXX test/cpp_headers/blob.o 00:02:41.551 LINK hotplug 00:02:41.551 CC examples/vmd/led/led.o 00:02:41.551 LINK cmb_copy 00:02:41.551 LINK arbitration 00:02:41.551 CXX test/cpp_headers/conf.o 00:02:41.551 CXX test/cpp_headers/config.o 00:02:41.551 LINK led 00:02:41.551 LINK reset 00:02:41.551 LINK sgl 00:02:41.810 CC test/nvme/e2edp/nvme_dp.o 00:02:41.810 CXX test/cpp_headers/cpuset.o 00:02:41.810 LINK abort 00:02:41.810 CC test/nvme/overhead/overhead.o 00:02:41.810 LINK pci_ut 00:02:41.810 CC test/nvme/err_injection/err_injection.o 00:02:41.810 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.810 CC test/nvme/startup/startup.o 00:02:41.810 CXX test/cpp_headers/crc16.o 00:02:41.810 CC test/nvme/reserve/reserve.o 00:02:42.068 LINK err_injection 00:02:42.068 CC test/nvme/simple_copy/simple_copy.o 00:02:42.068 LINK nvme_dp 00:02:42.068 LINK overhead 00:02:42.068 CXX test/cpp_headers/crc32.o 00:02:42.068 LINK pmr_persistence 00:02:42.068 LINK startup 00:02:42.068 CC test/nvme/connect_stress/connect_stress.o 00:02:42.327 CXX test/cpp_headers/crc64.o 00:02:42.327 CC test/nvme/boot_partition/boot_partition.o 00:02:42.327 CXX test/cpp_headers/dif.o 00:02:42.327 CXX test/cpp_headers/dma.o 00:02:42.327 LINK simple_copy 00:02:42.327 LINK reserve 00:02:42.327 LINK connect_stress 00:02:42.327 CC test/nvme/compliance/nvme_compliance.o 00:02:42.327 CC examples/nvmf/nvmf/nvmf.o 00:02:42.585 LINK boot_partition 00:02:42.585 CXX test/cpp_headers/endian.o 00:02:42.585 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.585 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.585 CC test/nvme/fdp/fdp.o 00:02:42.585 CC examples/util/zipf/zipf.o 00:02:42.585 CC test/nvme/cuse/cuse.o 00:02:42.585 CXX test/cpp_headers/env_dpdk.o 00:02:42.844 LINK nvme_compliance 00:02:42.844 LINK doorbell_aers 00:02:42.844 LINK fused_ordering 00:02:42.844 LINK nvmf 00:02:42.844 LINK zipf 00:02:42.844 CC examples/thread/thread/thread_ex.o 00:02:42.844 LINK fdp 00:02:42.844 CXX test/cpp_headers/env.o 00:02:42.844 CXX test/cpp_headers/event.o 00:02:42.844 CXX test/cpp_headers/fd_group.o 00:02:43.102 CXX test/cpp_headers/fd.o 00:02:43.102 CXX test/cpp_headers/file.o 00:02:43.102 CXX test/cpp_headers/ftl.o 00:02:43.102 LINK thread 00:02:43.102 CC examples/idxd/perf/perf.o 00:02:43.102 CXX test/cpp_headers/gpt_spec.o 00:02:43.102 CXX test/cpp_headers/hexlify.o 00:02:43.102 CC test/thread/poller_perf/poller_perf.o 00:02:43.102 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.360 CXX test/cpp_headers/histogram_data.o 00:02:43.360 CXX test/cpp_headers/idxd.o 00:02:43.360 CXX test/cpp_headers/idxd_spec.o 00:02:43.360 CXX test/cpp_headers/init.o 00:02:43.360 LINK poller_perf 00:02:43.360 CXX test/cpp_headers/ioat.o 00:02:43.360 LINK interrupt_tgt 00:02:43.360 CXX test/cpp_headers/ioat_spec.o 00:02:43.360 LINK idxd_perf 00:02:43.360 CXX test/cpp_headers/iscsi_spec.o 00:02:43.360 CXX test/cpp_headers/json.o 00:02:43.360 CXX test/cpp_headers/jsonrpc.o 00:02:43.619 CXX test/cpp_headers/likely.o 00:02:43.619 CXX test/cpp_headers/log.o 00:02:43.619 CXX test/cpp_headers/lvol.o 00:02:43.619 CXX 
test/cpp_headers/memory.o 00:02:43.619 CXX test/cpp_headers/mmio.o 00:02:43.619 CXX test/cpp_headers/nbd.o 00:02:43.619 CXX test/cpp_headers/notify.o 00:02:43.619 CXX test/cpp_headers/nvme.o 00:02:43.619 CXX test/cpp_headers/nvme_intel.o 00:02:43.619 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.619 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.619 LINK cuse 00:02:43.877 CXX test/cpp_headers/nvme_spec.o 00:02:43.877 CXX test/cpp_headers/nvme_zns.o 00:02:43.877 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.877 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.877 CXX test/cpp_headers/nvmf.o 00:02:43.877 CXX test/cpp_headers/nvmf_spec.o 00:02:43.877 CXX test/cpp_headers/nvmf_transport.o 00:02:43.877 CXX test/cpp_headers/opal.o 00:02:43.877 CXX test/cpp_headers/opal_spec.o 00:02:43.877 CXX test/cpp_headers/pci_ids.o 00:02:43.877 CXX test/cpp_headers/pipe.o 00:02:44.136 CXX test/cpp_headers/queue.o 00:02:44.136 CXX test/cpp_headers/reduce.o 00:02:44.136 CXX test/cpp_headers/rpc.o 00:02:44.136 CXX test/cpp_headers/scheduler.o 00:02:44.136 CXX test/cpp_headers/scsi.o 00:02:44.136 CXX test/cpp_headers/scsi_spec.o 00:02:44.136 CXX test/cpp_headers/sock.o 00:02:44.136 CXX test/cpp_headers/stdinc.o 00:02:44.136 CXX test/cpp_headers/string.o 00:02:44.136 CXX test/cpp_headers/thread.o 00:02:44.136 CXX test/cpp_headers/trace.o 00:02:44.136 CXX test/cpp_headers/trace_parser.o 00:02:44.136 CXX test/cpp_headers/tree.o 00:02:44.136 CXX test/cpp_headers/ublk.o 00:02:44.136 CXX test/cpp_headers/util.o 00:02:44.136 CXX test/cpp_headers/uuid.o 00:02:44.136 CXX test/cpp_headers/version.o 00:02:44.136 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.394 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.394 CXX test/cpp_headers/vhost.o 00:02:44.394 CXX test/cpp_headers/vmd.o 00:02:44.394 CXX test/cpp_headers/xor.o 00:02:44.394 CXX test/cpp_headers/zipf.o 00:02:44.961 LINK esnap 00:02:45.528 00:02:45.529 real 1m0.806s 00:02:45.529 user 6m33.323s 00:02:45.529 sys 1m22.907s 00:02:45.529 18:01:03 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:45.529 ************************************ 00:02:45.529 END TEST make 00:02:45.529 ************************************ 00:02:45.529 18:01:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.529 18:01:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:45.529 18:01:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:45.529 18:01:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:45.529 18:01:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:45.529 18:01:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:45.529 18:01:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:45.529 18:01:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:45.529 18:01:04 -- scripts/common.sh@335 -- # IFS=.-: 00:02:45.529 18:01:04 -- scripts/common.sh@335 -- # read -ra ver1 00:02:45.529 18:01:04 -- scripts/common.sh@336 -- # IFS=.-: 00:02:45.529 18:01:04 -- scripts/common.sh@336 -- # read -ra ver2 00:02:45.529 18:01:04 -- scripts/common.sh@337 -- # local 'op=<' 00:02:45.529 18:01:04 -- scripts/common.sh@339 -- # ver1_l=2 00:02:45.529 18:01:04 -- scripts/common.sh@340 -- # ver2_l=1 00:02:45.529 18:01:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:45.529 18:01:04 -- scripts/common.sh@343 -- # case "$op" in 00:02:45.529 18:01:04 -- scripts/common.sh@344 -- # : 1 00:02:45.529 18:01:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:45.529 18:01:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:45.529 18:01:04 -- scripts/common.sh@364 -- # decimal 1 00:02:45.529 18:01:04 -- scripts/common.sh@352 -- # local d=1 00:02:45.529 18:01:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:45.529 18:01:04 -- scripts/common.sh@354 -- # echo 1 00:02:45.529 18:01:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:45.529 18:01:04 -- scripts/common.sh@365 -- # decimal 2 00:02:45.529 18:01:04 -- scripts/common.sh@352 -- # local d=2 00:02:45.529 18:01:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:45.529 18:01:04 -- scripts/common.sh@354 -- # echo 2 00:02:45.529 18:01:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:45.529 18:01:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:45.529 18:01:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:45.529 18:01:04 -- scripts/common.sh@367 -- # return 0 00:02:45.529 18:01:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:45.529 18:01:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.529 --rc genhtml_branch_coverage=1 00:02:45.529 --rc genhtml_function_coverage=1 00:02:45.529 --rc genhtml_legend=1 00:02:45.529 --rc geninfo_all_blocks=1 00:02:45.529 --rc geninfo_unexecuted_blocks=1 00:02:45.529 00:02:45.529 ' 00:02:45.529 18:01:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.529 --rc genhtml_branch_coverage=1 00:02:45.529 --rc genhtml_function_coverage=1 00:02:45.529 --rc genhtml_legend=1 00:02:45.529 --rc geninfo_all_blocks=1 00:02:45.529 --rc geninfo_unexecuted_blocks=1 00:02:45.529 00:02:45.529 ' 00:02:45.529 18:01:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.529 --rc genhtml_branch_coverage=1 00:02:45.529 --rc genhtml_function_coverage=1 00:02:45.529 --rc genhtml_legend=1 00:02:45.529 --rc geninfo_all_blocks=1 00:02:45.529 --rc geninfo_unexecuted_blocks=1 00:02:45.529 00:02:45.529 ' 00:02:45.529 18:01:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.529 --rc genhtml_branch_coverage=1 00:02:45.529 --rc genhtml_function_coverage=1 00:02:45.529 --rc genhtml_legend=1 00:02:45.529 --rc geninfo_all_blocks=1 00:02:45.529 --rc geninfo_unexecuted_blocks=1 00:02:45.529 00:02:45.529 ' 00:02:45.529 18:01:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:45.529 18:01:04 -- nvmf/common.sh@7 -- # uname -s 00:02:45.529 18:01:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:45.529 18:01:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:45.529 18:01:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:45.529 18:01:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:45.529 18:01:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:45.529 18:01:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:45.529 18:01:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:45.529 18:01:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:45.529 18:01:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:45.529 18:01:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:45.529 18:01:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:02:45.529 
18:01:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:02:45.529 18:01:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:45.529 18:01:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:45.529 18:01:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:45.529 18:01:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:45.529 18:01:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:45.529 18:01:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.529 18:01:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.529 18:01:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.529 18:01:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.529 18:01:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.529 18:01:04 -- paths/export.sh@5 -- # export PATH 00:02:45.529 18:01:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.529 18:01:04 -- nvmf/common.sh@46 -- # : 0 00:02:45.529 18:01:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:45.529 18:01:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:45.529 18:01:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:45.529 18:01:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:45.529 18:01:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:45.529 18:01:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:45.529 18:01:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:45.529 18:01:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:45.529 18:01:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:45.529 18:01:04 -- spdk/autotest.sh@32 -- # uname -s 00:02:45.529 18:01:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:45.529 18:01:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:45.529 18:01:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:45.529 18:01:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:45.529 18:01:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:45.529 18:01:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:45.529 18:01:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:45.529 18:01:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:45.529 18:01:04 -- spdk/autotest.sh@48 -- # 
udevadm_pid=48007 00:02:45.529 18:01:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:45.529 18:01:04 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:45.529 18:01:04 -- spdk/autotest.sh@54 -- # echo 48036 00:02:45.529 18:01:04 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:45.529 18:01:04 -- spdk/autotest.sh@56 -- # echo 48040 00:02:45.529 18:01:04 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:45.788 18:01:04 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:45.788 18:01:04 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:45.788 18:01:04 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:45.788 18:01:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:45.788 18:01:04 -- common/autotest_common.sh@10 -- # set +x 00:02:45.788 18:01:04 -- spdk/autotest.sh@70 -- # create_test_list 00:02:45.788 18:01:04 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:45.788 18:01:04 -- common/autotest_common.sh@10 -- # set +x 00:02:45.788 18:01:04 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:45.788 18:01:04 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:45.788 18:01:04 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:45.788 18:01:04 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:45.788 18:01:04 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:45.788 18:01:04 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:45.788 18:01:04 -- common/autotest_common.sh@1450 -- # uname 00:02:45.788 18:01:04 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:45.788 18:01:04 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:45.788 18:01:04 -- common/autotest_common.sh@1470 -- # uname 00:02:45.788 18:01:04 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:45.788 18:01:04 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:45.788 18:01:04 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:45.788 lcov: LCOV version 1.15 00:02:45.788 18:01:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:55.774 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.774 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:55.774 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.774 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:55.774 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.774 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:17.709 18:01:33 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:17.709 18:01:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:17.709 18:01:33 -- common/autotest_common.sh@10 -- # set +x 00:03:17.709 18:01:33 -- spdk/autotest.sh@89 -- # rm -f 00:03:17.709 18:01:33 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:17.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:17.709 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:17.709 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:17.709 18:01:33 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:17.709 18:01:33 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:17.709 18:01:33 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:17.709 18:01:33 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:17.709 18:01:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.709 18:01:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:17.709 18:01:33 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:17.709 18:01:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.709 18:01:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:17.709 18:01:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:17.709 18:01:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.709 18:01:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:17.709 18:01:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:17.709 18:01:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:17.709 18:01:33 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:17.709 18:01:33 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:17.709 18:01:33 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:17.709 18:01:33 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:17.709 18:01:33 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:17.709 18:01:33 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:17.709 18:01:33 -- spdk/autotest.sh@108 -- # grep -v p 00:03:17.709 18:01:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.709 18:01:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.709 18:01:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:17.709 18:01:33 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:17.709 18:01:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.709 No valid GPT data, bailing 00:03:17.709 18:01:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
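
The get_zoned_devs calls traced above reduce to a small sysfs filter: any namespace whose queue/zoned attribute reads something other than "none" is recorded so the cleanup that follows leaves it alone. A minimal standalone sketch of that check (the echo at the end is only for illustration; the traced helper fills a zoned_devs array that later steps consult):

    #!/usr/bin/env bash
    # Sketch of the zoned-namespace filter traced above (get_zoned_devs).
    # Assumption: a queue/zoned value other than "none" marks the namespace
    # as zoned and excludes it from the wipe performed afterwards.
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
    echo "zoned namespaces to skip: ${!zoned_devs[*]}"
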
00:03:17.709 18:01:33 -- scripts/common.sh@393 -- # pt= 00:03:17.709 18:01:33 -- scripts/common.sh@394 -- # return 1 00:03:17.709 18:01:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.709 1+0 records in 00:03:17.709 1+0 records out 00:03:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490472 s, 214 MB/s 00:03:17.709 18:01:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.709 18:01:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.709 18:01:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:17.709 18:01:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:17.709 18:01:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:17.709 No valid GPT data, bailing 00:03:17.709 18:01:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:17.709 18:01:33 -- scripts/common.sh@393 -- # pt= 00:03:17.709 18:01:33 -- scripts/common.sh@394 -- # return 1 00:03:17.709 18:01:33 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:17.709 1+0 records in 00:03:17.709 1+0 records out 00:03:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00311767 s, 336 MB/s 00:03:17.709 18:01:33 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.709 18:01:33 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.709 18:01:33 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:17.709 18:01:33 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:17.709 18:01:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:17.709 No valid GPT data, bailing 00:03:17.709 18:01:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:17.709 18:01:34 -- scripts/common.sh@393 -- # pt= 00:03:17.709 18:01:34 -- scripts/common.sh@394 -- # return 1 00:03:17.709 18:01:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:17.709 1+0 records in 00:03:17.709 1+0 records out 00:03:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384737 s, 273 MB/s 00:03:17.709 18:01:34 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:17.709 18:01:34 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:17.709 18:01:34 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:17.709 18:01:34 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:17.709 18:01:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:17.709 No valid GPT data, bailing 00:03:17.709 18:01:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:17.709 18:01:34 -- scripts/common.sh@393 -- # pt= 00:03:17.709 18:01:34 -- scripts/common.sh@394 -- # return 1 00:03:17.709 18:01:34 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:17.709 1+0 records in 00:03:17.709 1+0 records out 00:03:17.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420408 s, 249 MB/s 00:03:17.709 18:01:34 -- spdk/autotest.sh@116 -- # sync 00:03:17.709 18:01:34 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:17.709 18:01:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:17.709 18:01:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:17.709 18:01:36 -- spdk/autotest.sh@122 -- # uname -s 00:03:17.709 18:01:36 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
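
The dd runs above come from a pre-cleanup pass over every NVMe namespace: if no partition table is found, the first megabyte is zeroed so stale metadata cannot influence the tests that follow. A hedged sketch of that loop, using blkid alone where the traced script first consults scripts/spdk-gpt.py:

    # Sketch of the wipe loop traced above. Assumption: blkid reporting no
    # PTTYPE is treated the same as spdk-gpt.py finding no valid GPT.
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
            echo "skipping $dev: partition table '$pt' present"
            continue
        fi
        # Zero the first 1 MiB so leftover metadata does not leak into the run.
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
    sync
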
00:03:17.709 18:01:36 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:17.709 18:01:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.709 18:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.709 18:01:36 -- common/autotest_common.sh@10 -- # set +x 00:03:17.710 ************************************ 00:03:17.710 START TEST setup.sh 00:03:17.710 ************************************ 00:03:17.710 18:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:17.710 * Looking for test storage... 00:03:17.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:17.710 18:01:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:17.710 18:01:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:17.710 18:01:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:17.969 18:01:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:17.969 18:01:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:17.969 18:01:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:17.969 18:01:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:17.969 18:01:36 -- scripts/common.sh@335 -- # IFS=.-: 00:03:17.969 18:01:36 -- scripts/common.sh@335 -- # read -ra ver1 00:03:17.969 18:01:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:17.969 18:01:36 -- scripts/common.sh@336 -- # read -ra ver2 00:03:17.969 18:01:36 -- scripts/common.sh@337 -- # local 'op=<' 00:03:17.969 18:01:36 -- scripts/common.sh@339 -- # ver1_l=2 00:03:17.969 18:01:36 -- scripts/common.sh@340 -- # ver2_l=1 00:03:17.970 18:01:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:17.970 18:01:36 -- scripts/common.sh@343 -- # case "$op" in 00:03:17.970 18:01:36 -- scripts/common.sh@344 -- # : 1 00:03:17.970 18:01:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:17.970 18:01:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:17.970 18:01:36 -- scripts/common.sh@364 -- # decimal 1 00:03:17.970 18:01:36 -- scripts/common.sh@352 -- # local d=1 00:03:17.970 18:01:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:17.970 18:01:36 -- scripts/common.sh@354 -- # echo 1 00:03:17.970 18:01:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:17.970 18:01:36 -- scripts/common.sh@365 -- # decimal 2 00:03:17.970 18:01:36 -- scripts/common.sh@352 -- # local d=2 00:03:17.970 18:01:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:17.970 18:01:36 -- scripts/common.sh@354 -- # echo 2 00:03:17.970 18:01:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:17.970 18:01:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:17.970 18:01:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:17.970 18:01:36 -- scripts/common.sh@367 -- # return 0 00:03:17.970 18:01:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:17.970 18:01:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.970 --rc genhtml_branch_coverage=1 00:03:17.970 --rc genhtml_function_coverage=1 00:03:17.970 --rc genhtml_legend=1 00:03:17.970 --rc geninfo_all_blocks=1 00:03:17.970 --rc geninfo_unexecuted_blocks=1 00:03:17.970 00:03:17.970 ' 00:03:17.970 18:01:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.970 --rc genhtml_branch_coverage=1 00:03:17.970 --rc genhtml_function_coverage=1 00:03:17.970 --rc genhtml_legend=1 00:03:17.970 --rc geninfo_all_blocks=1 00:03:17.970 --rc geninfo_unexecuted_blocks=1 00:03:17.970 00:03:17.970 ' 00:03:17.970 18:01:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.970 --rc genhtml_branch_coverage=1 00:03:17.970 --rc genhtml_function_coverage=1 00:03:17.970 --rc genhtml_legend=1 00:03:17.970 --rc geninfo_all_blocks=1 00:03:17.970 --rc geninfo_unexecuted_blocks=1 00:03:17.970 00:03:17.970 ' 00:03:17.970 18:01:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:17.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:17.970 --rc genhtml_branch_coverage=1 00:03:17.970 --rc genhtml_function_coverage=1 00:03:17.970 --rc genhtml_legend=1 00:03:17.970 --rc geninfo_all_blocks=1 00:03:17.970 --rc geninfo_unexecuted_blocks=1 00:03:17.970 00:03:17.970 ' 00:03:17.970 18:01:36 -- setup/test-setup.sh@10 -- # uname -s 00:03:17.970 18:01:36 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:17.970 18:01:36 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:17.970 18:01:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.970 18:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.970 18:01:36 -- common/autotest_common.sh@10 -- # set +x 00:03:17.970 ************************************ 00:03:17.970 START TEST acl 00:03:17.970 ************************************ 00:03:17.970 18:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:17.970 * Looking for test storage... 
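
The cmp_versions trace that keeps reappearing (here and before TEST make above) is a plain dotted-version compare used to pick lcov's coverage flags: versions below 2 get the --rc lcov_*_coverage=1 spelling exported via LCOV_OPTS. A compact sketch of the same gate; sort -V stands in for the manual field-by-field compare, and the lcov 2.x branch is an assumption since this run only exercises the older path:

    # Sketch of the lcov version gate traced above.
    ver=$(lcov --version | awk '{print $NF}')        # e.g. 1.15 in this run
    if [[ $(printf '%s\n' "$ver" 2 | sort -V | head -n1) == "$ver" && $ver != 2 ]]; then
        # lcov < 2: branch/function coverage uses the lcov_* --rc keys
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        # assumption: lcov 2.x renamed these keys to branch/function_coverage
        export LCOV_OPTS='--rc branch_coverage=1 --rc function_coverage=1'
    fi
    echo "lcov $ver -> LCOV_OPTS=$LCOV_OPTS"
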
00:03:17.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:17.970 18:01:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:17.970 18:01:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:17.970 18:01:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:18.230 18:01:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:18.230 18:01:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:18.230 18:01:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:18.230 18:01:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:18.230 18:01:36 -- scripts/common.sh@335 -- # IFS=.-: 00:03:18.230 18:01:36 -- scripts/common.sh@335 -- # read -ra ver1 00:03:18.230 18:01:36 -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.230 18:01:36 -- scripts/common.sh@336 -- # read -ra ver2 00:03:18.230 18:01:36 -- scripts/common.sh@337 -- # local 'op=<' 00:03:18.230 18:01:36 -- scripts/common.sh@339 -- # ver1_l=2 00:03:18.230 18:01:36 -- scripts/common.sh@340 -- # ver2_l=1 00:03:18.230 18:01:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:18.230 18:01:36 -- scripts/common.sh@343 -- # case "$op" in 00:03:18.230 18:01:36 -- scripts/common.sh@344 -- # : 1 00:03:18.230 18:01:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:18.230 18:01:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:18.230 18:01:36 -- scripts/common.sh@364 -- # decimal 1 00:03:18.230 18:01:36 -- scripts/common.sh@352 -- # local d=1 00:03:18.230 18:01:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.230 18:01:36 -- scripts/common.sh@354 -- # echo 1 00:03:18.230 18:01:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:18.230 18:01:36 -- scripts/common.sh@365 -- # decimal 2 00:03:18.230 18:01:36 -- scripts/common.sh@352 -- # local d=2 00:03:18.230 18:01:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.230 18:01:36 -- scripts/common.sh@354 -- # echo 2 00:03:18.230 18:01:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:18.230 18:01:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:18.230 18:01:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:18.230 18:01:36 -- scripts/common.sh@367 -- # return 0 00:03:18.230 18:01:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.230 18:01:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.230 --rc genhtml_branch_coverage=1 00:03:18.230 --rc genhtml_function_coverage=1 00:03:18.230 --rc genhtml_legend=1 00:03:18.230 --rc geninfo_all_blocks=1 00:03:18.230 --rc geninfo_unexecuted_blocks=1 00:03:18.230 00:03:18.230 ' 00:03:18.230 18:01:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.230 --rc genhtml_branch_coverage=1 00:03:18.230 --rc genhtml_function_coverage=1 00:03:18.230 --rc genhtml_legend=1 00:03:18.230 --rc geninfo_all_blocks=1 00:03:18.230 --rc geninfo_unexecuted_blocks=1 00:03:18.230 00:03:18.230 ' 00:03:18.230 18:01:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.230 --rc genhtml_branch_coverage=1 00:03:18.230 --rc genhtml_function_coverage=1 00:03:18.230 --rc genhtml_legend=1 00:03:18.230 --rc geninfo_all_blocks=1 00:03:18.230 --rc geninfo_unexecuted_blocks=1 00:03:18.230 00:03:18.230 ' 00:03:18.230 18:01:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:18.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.230 --rc genhtml_branch_coverage=1 00:03:18.230 --rc genhtml_function_coverage=1 00:03:18.230 --rc genhtml_legend=1 00:03:18.230 --rc geninfo_all_blocks=1 00:03:18.230 --rc geninfo_unexecuted_blocks=1 00:03:18.230 00:03:18.230 ' 00:03:18.230 18:01:36 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:18.230 18:01:36 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:18.230 18:01:36 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:18.230 18:01:36 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:18.230 18:01:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.230 18:01:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:18.230 18:01:36 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:18.230 18:01:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.230 18:01:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:18.230 18:01:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:18.230 18:01:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.230 18:01:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:18.230 18:01:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:18.230 18:01:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:18.230 18:01:36 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:18.230 18:01:36 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:18.230 18:01:36 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:18.230 18:01:36 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:18.230 18:01:36 -- setup/acl.sh@12 -- # devs=() 00:03:18.230 18:01:36 -- setup/acl.sh@12 -- # declare -a devs 00:03:18.230 18:01:36 -- setup/acl.sh@13 -- # drivers=() 00:03:18.230 18:01:36 -- setup/acl.sh@13 -- # declare -A drivers 00:03:18.230 18:01:36 -- setup/acl.sh@51 -- # setup reset 00:03:18.230 18:01:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.230 18:01:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:18.798 18:01:37 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:18.798 18:01:37 -- setup/acl.sh@16 -- # local dev driver 00:03:18.798 18:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.798 18:01:37 -- setup/acl.sh@15 -- # setup output status 00:03:18.798 18:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.798 18:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:19.058 Hugepages 00:03:19.058 node hugesize free / total 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # continue 00:03:19.058 18:01:37 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:19.058 00:03:19.058 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # continue 00:03:19.058 18:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:19.058 18:01:37 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:19.058 18:01:37 -- setup/acl.sh@20 -- # continue 00:03:19.058 18:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.058 18:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:19.058 18:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:19.058 18:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:19.058 18:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:19.058 18:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:19.058 18:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.317 18:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:19.317 18:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:19.317 18:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:19.317 18:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:19.317 18:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:19.317 18:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.317 18:01:37 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:19.317 18:01:37 -- setup/acl.sh@54 -- # run_test denied denied 00:03:19.317 18:01:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.317 18:01:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.317 18:01:37 -- common/autotest_common.sh@10 -- # set +x 00:03:19.317 ************************************ 00:03:19.317 START TEST denied 00:03:19.317 ************************************ 00:03:19.317 18:01:37 -- common/autotest_common.sh@1114 -- # denied 00:03:19.317 18:01:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:19.317 18:01:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:19.317 18:01:37 -- setup/acl.sh@38 -- # setup output config 00:03:19.317 18:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.317 18:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:20.261 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:20.261 18:01:38 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:20.261 18:01:38 -- setup/acl.sh@28 -- # local dev driver 00:03:20.261 18:01:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.261 18:01:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:20.261 18:01:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:20.261 18:01:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.261 18:01:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.261 18:01:38 -- setup/acl.sh@41 -- # setup reset 00:03:20.261 18:01:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.261 18:01:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:20.831 00:03:20.831 real 0m1.442s 00:03:20.831 user 0m0.589s 00:03:20.831 sys 0m0.819s 00:03:20.831 18:01:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:20.831 ************************************ 00:03:20.831 END TEST denied 00:03:20.831 ************************************ 00:03:20.831 18:01:39 -- 
common/autotest_common.sh@10 -- # set +x 00:03:20.831 18:01:39 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:20.831 18:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.831 18:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.831 18:01:39 -- common/autotest_common.sh@10 -- # set +x 00:03:20.831 ************************************ 00:03:20.831 START TEST allowed 00:03:20.831 ************************************ 00:03:20.831 18:01:39 -- common/autotest_common.sh@1114 -- # allowed 00:03:20.831 18:01:39 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:20.831 18:01:39 -- setup/acl.sh@45 -- # setup output config 00:03:20.831 18:01:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.831 18:01:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:20.831 18:01:39 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:21.400 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:21.400 18:01:39 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:03:21.400 18:01:39 -- setup/acl.sh@28 -- # local dev driver 00:03:21.400 18:01:39 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:21.400 18:01:39 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:03:21.400 18:01:39 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:03:21.400 18:01:39 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:21.400 18:01:39 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:21.400 18:01:39 -- setup/acl.sh@48 -- # setup reset 00:03:21.400 18:01:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.400 18:01:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:22.338 00:03:22.338 real 0m1.454s 00:03:22.338 user 0m0.654s 00:03:22.338 sys 0m0.806s 00:03:22.338 18:01:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:22.338 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:03:22.338 ************************************ 00:03:22.338 END TEST allowed 00:03:22.338 ************************************ 00:03:22.338 00:03:22.338 real 0m4.241s 00:03:22.338 user 0m1.885s 00:03:22.338 sys 0m2.359s 00:03:22.338 18:01:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:22.338 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:03:22.338 ************************************ 00:03:22.338 END TEST acl 00:03:22.338 ************************************ 00:03:22.338 18:01:40 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:22.338 18:01:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.338 18:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.338 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:03:22.338 ************************************ 00:03:22.338 START TEST hugepages 00:03:22.338 ************************************ 00:03:22.338 18:01:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:22.338 * Looking for test storage... 
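
The denied/allowed pair that just finished exercises setup.sh's PCI filters: a controller listed in PCI_BLOCKED must be skipped and keep the kernel nvme driver, while one listed in PCI_ALLOWED is rebound (to uio_pci_generic in this run) and every other controller is left untouched. A reduced sketch of both checks, reusing the two controller addresses from this run as examples:

    # Sketch of the acl denied/allowed checks traced above. The BDFs and the
    # uio_pci_generic target come from this particular run; treat them as examples.
    setup=/home/vagrant/spdk_repo/spdk/scripts/setup.sh
    blocked=0000:00:06.0
    other=0000:00:07.0

    PCI_BLOCKED=" $blocked" "$setup" config \
        | grep -q "Skipping denied controller at $blocked"
    [[ $(readlink -f /sys/bus/pci/devices/$blocked/driver) == */nvme ]] \
        && echo "denied: $blocked still bound to the kernel nvme driver"

    PCI_ALLOWED=$blocked "$setup" config \
        | grep -Eq "$blocked .*: nvme -> .*"
    [[ $(readlink -f /sys/bus/pci/devices/$other/driver) == */nvme ]] \
        && echo "allowed: $other untouched, still bound to nvme"
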
00:03:22.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:22.338 18:01:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:22.338 18:01:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:22.338 18:01:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:22.338 18:01:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:22.338 18:01:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:22.338 18:01:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:22.338 18:01:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:22.338 18:01:40 -- scripts/common.sh@335 -- # IFS=.-: 00:03:22.338 18:01:40 -- scripts/common.sh@335 -- # read -ra ver1 00:03:22.338 18:01:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.338 18:01:40 -- scripts/common.sh@336 -- # read -ra ver2 00:03:22.338 18:01:40 -- scripts/common.sh@337 -- # local 'op=<' 00:03:22.338 18:01:40 -- scripts/common.sh@339 -- # ver1_l=2 00:03:22.338 18:01:40 -- scripts/common.sh@340 -- # ver2_l=1 00:03:22.338 18:01:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:22.338 18:01:40 -- scripts/common.sh@343 -- # case "$op" in 00:03:22.338 18:01:40 -- scripts/common.sh@344 -- # : 1 00:03:22.338 18:01:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:22.338 18:01:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.338 18:01:40 -- scripts/common.sh@364 -- # decimal 1 00:03:22.338 18:01:40 -- scripts/common.sh@352 -- # local d=1 00:03:22.338 18:01:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.338 18:01:40 -- scripts/common.sh@354 -- # echo 1 00:03:22.338 18:01:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:22.338 18:01:40 -- scripts/common.sh@365 -- # decimal 2 00:03:22.338 18:01:40 -- scripts/common.sh@352 -- # local d=2 00:03:22.338 18:01:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.338 18:01:40 -- scripts/common.sh@354 -- # echo 2 00:03:22.338 18:01:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:22.338 18:01:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:22.338 18:01:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:22.338 18:01:40 -- scripts/common.sh@367 -- # return 0 00:03:22.338 18:01:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.338 18:01:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:22.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.338 --rc genhtml_branch_coverage=1 00:03:22.338 --rc genhtml_function_coverage=1 00:03:22.338 --rc genhtml_legend=1 00:03:22.338 --rc geninfo_all_blocks=1 00:03:22.338 --rc geninfo_unexecuted_blocks=1 00:03:22.338 00:03:22.338 ' 00:03:22.338 18:01:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:22.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.338 --rc genhtml_branch_coverage=1 00:03:22.338 --rc genhtml_function_coverage=1 00:03:22.338 --rc genhtml_legend=1 00:03:22.338 --rc geninfo_all_blocks=1 00:03:22.338 --rc geninfo_unexecuted_blocks=1 00:03:22.338 00:03:22.338 ' 00:03:22.338 18:01:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:22.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.338 --rc genhtml_branch_coverage=1 00:03:22.338 --rc genhtml_function_coverage=1 00:03:22.338 --rc genhtml_legend=1 00:03:22.338 --rc geninfo_all_blocks=1 00:03:22.338 --rc geninfo_unexecuted_blocks=1 00:03:22.338 00:03:22.338 ' 00:03:22.338 18:01:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:22.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.339 --rc genhtml_branch_coverage=1 00:03:22.339 --rc genhtml_function_coverage=1 00:03:22.339 --rc genhtml_legend=1 00:03:22.339 --rc geninfo_all_blocks=1 00:03:22.339 --rc geninfo_unexecuted_blocks=1 00:03:22.339 00:03:22.339 ' 00:03:22.339 18:01:40 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:22.339 18:01:40 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:22.339 18:01:40 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:22.339 18:01:40 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:22.339 18:01:40 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:22.339 18:01:40 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:22.339 18:01:40 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:22.339 18:01:40 -- setup/common.sh@18 -- # local node= 00:03:22.339 18:01:40 -- setup/common.sh@19 -- # local var val 00:03:22.339 18:01:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.339 18:01:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.339 18:01:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.339 18:01:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.339 18:01:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.339 18:01:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 5968792 kB' 'MemAvailable: 7351920 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 455760 kB' 'Inactive: 1261204 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 119124 kB' 'Mapped: 50988 kB' 'Shmem: 10508 kB' 'KReclaimable: 62452 kB' 'Slab: 156024 kB' 'SReclaimable: 62452 kB' 'SUnreclaim: 93572 kB' 'KernelStack: 6576 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 320256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- 
setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.339 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.339 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # continue 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.340 18:01:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.340 18:01:40 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:22.340 18:01:40 -- setup/common.sh@33 -- # echo 2048 00:03:22.340 18:01:40 -- setup/common.sh@33 -- # return 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:22.340 18:01:40 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:22.340 18:01:40 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:22.340 18:01:40 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:22.340 18:01:40 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:22.340 18:01:40 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:22.340 18:01:40 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:22.340 18:01:40 -- setup/hugepages.sh@207 -- # get_nodes 00:03:22.340 18:01:40 -- setup/hugepages.sh@27 -- # local node 00:03:22.340 18:01:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.340 18:01:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:22.340 18:01:40 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:22.340 18:01:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.340 18:01:40 -- setup/hugepages.sh@208 -- # clear_hp 00:03:22.340 18:01:40 -- setup/hugepages.sh@37 -- # local node hp 00:03:22.340 18:01:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.340 18:01:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.340 18:01:40 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.340 18:01:40 -- setup/hugepages.sh@41 -- # echo 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.340 18:01:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.340 18:01:40 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:22.340 18:01:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.340 18:01:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.340 18:01:40 -- common/autotest_common.sh@10 -- # set +x 00:03:22.340 ************************************ 00:03:22.340 START TEST default_setup 00:03:22.340 ************************************ 00:03:22.340 18:01:40 -- common/autotest_common.sh@1114 -- # default_setup 00:03:22.340 18:01:40 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:22.340 18:01:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:22.340 18:01:40 -- setup/hugepages.sh@51 -- # shift 00:03:22.340 18:01:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:22.340 18:01:40 -- setup/hugepages.sh@52 -- # local node_ids 00:03:22.340 18:01:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:22.340 18:01:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:22.340 18:01:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:22.340 18:01:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:22.340 18:01:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:22.340 18:01:40 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:22.340 18:01:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:22.340 18:01:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:22.340 18:01:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:22.340 18:01:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:22.340 18:01:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:22.340 18:01:40 -- setup/hugepages.sh@73 -- # return 0 00:03:22.340 18:01:40 -- setup/hugepages.sh@137 -- # setup output 00:03:22.340 18:01:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.340 18:01:40 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:23.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.282 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:23.282 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:23.282 18:01:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:23.282 18:01:41 -- setup/hugepages.sh@89 -- # local node 00:03:23.282 18:01:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.282 18:01:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.282 18:01:41 -- setup/hugepages.sh@92 -- # local surp 00:03:23.282 18:01:41 -- setup/hugepages.sh@93 -- # local resv 00:03:23.282 18:01:41 -- setup/hugepages.sh@94 -- # local anon 00:03:23.282 18:01:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.282 18:01:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.282 18:01:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.282 18:01:41 -- setup/common.sh@18 -- # local node= 00:03:23.282 18:01:41 -- setup/common.sh@19 -- # local var val 00:03:23.282 18:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.282 18:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.282 18:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.282 18:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.282 18:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.282 18:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8065640 kB' 'MemAvailable: 9448612 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 457152 kB' 'Inactive: 1261224 kB' 'Active(anon): 129304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120476 kB' 'Mapped: 50992 kB' 'Shmem: 10484 kB' 'KReclaimable: 62104 kB' 'Slab: 155784 kB' 'SReclaimable: 62104 kB' 'SUnreclaim: 93680 kB' 'KernelStack: 6576 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.282 18:01:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.282 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.282 18:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- 
setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.283 18:01:41 -- setup/common.sh@33 -- # echo 0 00:03:23.283 18:01:41 -- setup/common.sh@33 -- # return 0 00:03:23.283 18:01:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.283 18:01:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.283 18:01:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.283 18:01:41 -- setup/common.sh@18 -- # local node= 00:03:23.283 18:01:41 -- setup/common.sh@19 -- # local var val 00:03:23.283 18:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.283 18:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.283 18:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.283 18:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.283 18:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.283 18:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8065648 kB' 'MemAvailable: 9448620 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 457080 kB' 'Inactive: 1261224 kB' 'Active(anon): 129232 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120264 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 62104 kB' 'Slab: 155776 kB' 'SReclaimable: 62104 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6528 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.283 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.283 18:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 
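[Editor's note] At this point the trace is inside get_meminfo scanning /proc/meminfo for HugePages_Surp, skipping every other key with "continue" until the requested one is found and its value echoed. A minimal sketch of that scan (illustrative only, not the SPDK common.sh function; the helper name is made up):

#!/usr/bin/env bash
# Sketch of the meminfo scan traced above: split each line on ': ',
# skip keys that do not match, echo the value of the requested key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Surp:    0" -> var=HugePages_Surp, val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

# On this VM the dump above shows the surplus count reading back as 0:
get_meminfo_sketch HugePages_Surp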
00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- 
setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 
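[Editor's note] The counts collected so far (AnonHugePages and HugePages_Surp) both came back 0; HugePages_Rsvd is read next, and the test size was fixed earlier in the trace: 2097152 kB requested at the 2048 kB default huge page size gives the 1024 pages that verify_nr_hugepages checks against HugePages_Total further down. A rough sketch of that bookkeeping, with variable names mirroring the xtrace (illustrative only, not the real hugepages.sh):

# Sizing and verification arithmetic seen in this trace (sketch).
size_kb=2097152          # requested size passed to get_test_nr_hugepages
default_hugepages=2048   # Hugepagesize reported by /proc/meminfo
nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024

anon=0 surp=0 resv=0     # AnonHugePages, HugePages_Surp, HugePages_Rsvd above
total=1024               # HugePages_Total from the meminfo dumps

# The verification passes only if the kernel's total matches the request
# plus surplus and reserved pages:
(( total == nr_hugepages + surp + resv )) && echo "hugepage count verified"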
00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.284 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.284 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.285 18:01:41 -- setup/common.sh@33 -- # echo 0 00:03:23.285 18:01:41 -- setup/common.sh@33 -- # return 0 00:03:23.285 18:01:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.285 18:01:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.285 18:01:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.285 18:01:41 -- setup/common.sh@18 -- # local node= 00:03:23.285 18:01:41 -- setup/common.sh@19 -- # local var val 00:03:23.285 18:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.285 18:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.285 18:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.285 18:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.285 18:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.285 18:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.285 
18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8065648 kB' 'MemAvailable: 9448620 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456532 kB' 'Inactive: 1261224 kB' 'Active(anon): 128684 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119804 kB' 'Mapped: 50952 kB' 'Shmem: 10484 kB' 'KReclaimable: 62104 kB' 'Slab: 155772 kB' 'SReclaimable: 62104 kB' 'SUnreclaim: 93668 kB' 'KernelStack: 6496 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 
18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.285 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.285 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 
18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.286 18:01:41 -- setup/common.sh@33 -- # echo 0 00:03:23.286 18:01:41 -- setup/common.sh@33 -- # return 0 00:03:23.286 18:01:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.286 nr_hugepages=1024 00:03:23.286 18:01:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.286 resv_hugepages=0 00:03:23.286 18:01:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.286 surplus_hugepages=0 00:03:23.286 18:01:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.286 anon_hugepages=0 00:03:23.286 18:01:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.286 18:01:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.286 18:01:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.286 18:01:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.286 18:01:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.286 18:01:41 -- setup/common.sh@18 -- # local node= 00:03:23.286 18:01:41 -- setup/common.sh@19 -- # local var val 00:03:23.286 18:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.286 18:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.286 18:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.286 18:01:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.286 18:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.286 18:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8065396 kB' 'MemAvailable: 9448368 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 1261224 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119604 kB' 'Mapped: 50824 kB' 
'Shmem: 10484 kB' 'KReclaimable: 62104 kB' 'Slab: 155776 kB' 'SReclaimable: 62104 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6496 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.286 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.286 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 
18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- 
setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.287 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.287 18:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- 
setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.288 18:01:41 -- setup/common.sh@33 -- # echo 1024 00:03:23.288 18:01:41 -- setup/common.sh@33 -- # return 0 00:03:23.288 18:01:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.288 18:01:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.288 18:01:41 -- setup/hugepages.sh@27 -- # local node 00:03:23.288 18:01:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.288 18:01:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.288 18:01:41 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:23.288 18:01:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.288 18:01:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.288 18:01:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.288 18:01:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.288 18:01:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.288 18:01:41 -- setup/common.sh@18 -- # local node=0 00:03:23.288 18:01:41 -- setup/common.sh@19 -- # local var val 00:03:23.288 18:01:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.288 18:01:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.288 18:01:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.288 18:01:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.288 18:01:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.288 18:01:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8066580 kB' 'MemUsed: 4172524 kB' 'SwapCached: 0 kB' 'Active: 456568 kB' 'Inactive: 1261224 kB' 'Active(anon): 128720 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1599556 kB' 'Mapped: 50824 kB' 'AnonPages: 119864 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62104 kB' 'Slab: 155776 kB' 'SReclaimable: 62104 kB' 'SUnreclaim: 93672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 
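The long field-by-field trace around this point is setup/common.sh's get_meminfo at work: it loads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node argument is given, strips the "Node <N> " prefix from the sysfs variant, then scans line by line with IFS=': ' until the requested key matches and echoes its value. A condensed sketch of that loop, with the structure taken from the trace above; the real helper differs in minor details:

  shopt -s extglob   # needed for the +([0-9]) prefix strip below
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      # Per-node queries read the sysfs copy instead of the global file
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
      local IFS=': '
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp 0   # -> 0 in the run traced here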
18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.288 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.288 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # continue 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.289 18:01:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.289 18:01:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.289 18:01:41 -- setup/common.sh@33 -- # echo 0 00:03:23.289 18:01:41 -- setup/common.sh@33 -- # return 0 00:03:23.289 18:01:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.289 18:01:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.289 18:01:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.289 18:01:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.289 node0=1024 expecting 1024 00:03:23.289 18:01:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:23.289 18:01:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.289 00:03:23.289 real 0m0.931s 00:03:23.289 user 0m0.454s 00:03:23.289 sys 0m0.439s 00:03:23.289 18:01:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:23.289 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:03:23.289 ************************************ 00:03:23.289 END TEST default_setup 00:03:23.289 ************************************ 00:03:23.548 18:01:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:23.548 18:01:41 
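What default_setup just verified, in plain terms: the kernel's global pool (HugePages_Total) has to equal the pages the test asked for plus any surplus and reserved pages (the hugepages.sh@110 check above), and the single NUMA node on this VM has to account for the whole 1024-page pool, hence the final 'node0=1024 expecting 1024' line. A self-contained check in the same spirit, with simplified bookkeeping rather than the script's exact nodes_test/nodes_sys arrays:

  expected=1024   # nr_hugepages the test configured
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  # Same invariant as hugepages.sh@110: the pool covers request + surplus + reserved
  (( total == expected + surp + resv )) || { echo 'global pool mismatch'; exit 1; }
  # Single-node VM: node0 should hold the whole pool
  node0=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
  echo "node0=$node0 expecting $expected"
  (( node0 == expected ))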
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.548 18:01:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.548 18:01:41 -- common/autotest_common.sh@10 -- # set +x 00:03:23.548 ************************************ 00:03:23.548 START TEST per_node_1G_alloc 00:03:23.548 ************************************ 00:03:23.548 18:01:41 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:03:23.548 18:01:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:23.548 18:01:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:23.548 18:01:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.548 18:01:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:23.548 18:01:41 -- setup/hugepages.sh@51 -- # shift 00:03:23.548 18:01:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:23.548 18:01:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.548 18:01:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.548 18:01:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.548 18:01:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:23.548 18:01:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:23.548 18:01:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.548 18:01:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.548 18:01:41 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:23.548 18:01:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.548 18:01:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.548 18:01:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:23.548 18:01:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.548 18:01:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.548 18:01:41 -- setup/hugepages.sh@73 -- # return 0 00:03:23.548 18:01:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:23.548 18:01:41 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:23.548 18:01:41 -- setup/hugepages.sh@146 -- # setup output 00:03:23.548 18:01:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.548 18:01:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:23.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:23.810 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.810 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:23.810 18:01:42 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:23.810 18:01:42 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:23.810 18:01:42 -- setup/hugepages.sh@89 -- # local node 00:03:23.810 18:01:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.810 18:01:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.810 18:01:42 -- setup/hugepages.sh@92 -- # local surp 00:03:23.810 18:01:42 -- setup/hugepages.sh@93 -- # local resv 00:03:23.810 18:01:42 -- setup/hugepages.sh@94 -- # local anon 00:03:23.810 18:01:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.810 18:01:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.810 18:01:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.810 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:23.810 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:23.810 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.810 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.810 18:01:42 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.810 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.810 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.810 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.810 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.810 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9120284 kB' 'MemAvailable: 10503256 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 457200 kB' 'Inactive: 1261224 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 120440 kB' 'Mapped: 51120 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155784 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93684 kB' 'KernelStack: 6560 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 
-- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 
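For the per_node_1G_alloc case traced here, get_test_nr_hugepages was called with 1048576 kB pinned to node 0; at the default 2048 kB hugepage size that is 1048576 / 2048 = 512 pages, which is why the script set nr_hugepages=512 and ran setup.sh with NRHUGE=512 HUGENODE=0, and why the meminfo dump above already reports HugePages_Total: 512 and Hugetlb: 1048576 kB. The same conversion as a one-off, reading the page size from /proc/meminfo (illustrative only, not part of the test scripts):

  size_kb=1048576   # 1 GiB worth of hugepages requested on node 0
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM
  echo "NRHUGE=$(( size_kb / hp_kb )) HUGENODE=0"             # -> NRHUGE=512 HUGENODE=0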
18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.811 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.811 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.812 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:23.812 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:23.812 18:01:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:23.812 18:01:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.812 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.812 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:23.812 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:23.812 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.812 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.812 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.812 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.812 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.812 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9120284 kB' 'MemAvailable: 10503256 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 457024 kB' 'Inactive: 1261224 kB' 'Active(anon): 129176 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 
kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 120264 kB' 'Mapped: 50928 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155780 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93680 kB' 'KernelStack: 6500 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # 
continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.812 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.812 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.813 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:23.813 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:23.813 18:01:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:23.813 18:01:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.813 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.813 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:23.813 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:23.813 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.813 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.813 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.813 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.813 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.813 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9120032 kB' 'MemAvailable: 10503004 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456584 kB' 'Inactive: 1261224 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155776 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93676 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.813 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.813 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.814 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.814 18:01:42 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.814 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:23.814 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:23.814 18:01:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:23.814 nr_hugepages=512 00:03:23.815 18:01:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:23.815 resv_hugepages=0 00:03:23.815 18:01:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.815 surplus_hugepages=0 00:03:23.815 18:01:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.815 anon_hugepages=0 00:03:23.815 18:01:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.815 18:01:42 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:23.815 18:01:42 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:23.815 18:01:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.815 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.815 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:23.815 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:23.815 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:23.815 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.815 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.815 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.815 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.815 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9119784 kB' 'MemAvailable: 10502756 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456664 kB' 'Inactive: 1261224 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 119968 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155772 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6528 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 
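The long run of "[[ <field> == HugePages_Rsvd ]] / continue" entries above is setup/common.sh's meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key, then echoing that key's value (0 reserved pages here, and 512 total pages in the HugePages_Total lookup it starts next). A minimal sketch of that lookup follows; the function name is illustrative and the xtrace plumbing and node handling of the real script are omitted.

# Sketch only: read a single field from /proc/meminfo.
# "get_meminfo_value" is an illustrative name, not the helper in setup/common.sh.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields until the requested one
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: print reserved and total hugepage counts, as the trace above does.
get_meminfo_value HugePages_Rsvd
get_meminfo_value HugePages_Total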
18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 
18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.815 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.815 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # continue 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.816 18:01:42 -- setup/common.sh@33 -- # echo 512 00:03:23.816 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:23.816 18:01:42 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:23.816 18:01:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.816 18:01:42 -- setup/hugepages.sh@27 -- # local node 00:03:23.816 18:01:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.816 18:01:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.816 18:01:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:23.816 18:01:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.076 18:01:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.076 18:01:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.076 18:01:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.076 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.076 18:01:42 -- setup/common.sh@18 -- # local node=0 00:03:24.076 18:01:42 -- setup/common.sh@19 -- # local 
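The HugePages_Surp lookup that begins here is the node-scoped variant: when the helper is handed a node number, the trace below shows mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo before the same field scan runs, and the "Node 0 " prefix being stripped from each line. A rough equivalent, with illustrative names and no claim to match the script's internals:

# Sketch only: pick the meminfo source for an optional NUMA node argument.
meminfo_source() {
    local node=${1-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

# Example: node 0 surplus hugepages, dropping the "Node 0 " prefix the per-node file uses.
grep HugePages_Surp "$(meminfo_source 0)" | sed 's/^Node [0-9]* //'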
var val 00:03:24.076 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.076 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.076 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.076 18:01:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.076 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.076 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9119784 kB' 'MemUsed: 3119320 kB' 'SwapCached: 0 kB' 'Active: 456496 kB' 'Inactive: 1261224 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'FilePages: 1599556 kB' 'Mapped: 50824 kB' 'AnonPages: 120036 kB' 'Shmem: 10484 kB' 'KernelStack: 6544 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 155768 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- 
setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.076 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.076 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.077 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.077 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.077 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:24.077 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.077 18:01:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.077 18:01:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.077 18:01:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.077 node0=512 expecting 512 00:03:24.077 18:01:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.077 18:01:42 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.077 00:03:24.077 real 0m0.518s 00:03:24.077 user 0m0.266s 00:03:24.077 sys 0m0.286s 00:03:24.077 18:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.077 18:01:42 -- common/autotest_common.sh@10 -- # set +x 00:03:24.077 ************************************ 00:03:24.077 END TEST per_node_1G_alloc 00:03:24.077 ************************************ 00:03:24.077 18:01:42 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:24.077 18:01:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.077 18:01:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.077 18:01:42 -- common/autotest_common.sh@10 -- # set +x 00:03:24.077 ************************************ 00:03:24.077 START TEST even_2G_alloc 00:03:24.077 ************************************ 00:03:24.077 18:01:42 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:24.077 18:01:42 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.077 18:01:42 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.077 18:01:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.077 18:01:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.077 18:01:42 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.077 18:01:42 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.077 18:01:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.077 18:01:42 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:24.077 18:01:42 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.077 18:01:42 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.077 18:01:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.077 18:01:42 -- 
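per_node_1G_alloc has just passed ("node0=512 expecting 512", about half a second of wall time), and even_2G_alloc opens by turning its 2097152 kB request into a hugepage count. With the 2048 kB Hugepagesize reported in the dumps above, that works out to 2097152 / 2048 = 1024 pages, which the per-node helper then spreads across the detected nodes (a single node on this VM). A sketch of that arithmetic, with illustrative variable names rather than the ones setup/hugepages.sh uses:

# Sketch only: convert a size in kB into a hugepage count and spread it over nodes.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)     # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                          # 1024
no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1                                       # single node here
per_node=$(( nr_hugepages / no_nodes ))
echo "nr_hugepages=$nr_hugepages per_node=$per_node"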
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:24.077 18:01:42 -- setup/hugepages.sh@83 -- # : 0 00:03:24.077 18:01:42 -- setup/hugepages.sh@84 -- # : 0 00:03:24.077 18:01:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.077 18:01:42 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.077 18:01:42 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.077 18:01:42 -- setup/hugepages.sh@153 -- # setup output 00:03:24.077 18:01:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.077 18:01:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:24.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:24.339 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.339 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.339 18:01:42 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:24.339 18:01:42 -- setup/hugepages.sh@89 -- # local node 00:03:24.339 18:01:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.339 18:01:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.339 18:01:42 -- setup/hugepages.sh@92 -- # local surp 00:03:24.339 18:01:42 -- setup/hugepages.sh@93 -- # local resv 00:03:24.339 18:01:42 -- setup/hugepages.sh@94 -- # local anon 00:03:24.339 18:01:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.339 18:01:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.339 18:01:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.339 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:24.339 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:24.339 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.339 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.339 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.339 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.339 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.339 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8082068 kB' 'MemAvailable: 9465040 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456812 kB' 'Inactive: 1261224 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 51020 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155772 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6560 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 
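The "[[ always [madvise] never != *\[never\]* ]]" test at the start of verify_nr_hugepages above is looking at the transparent_hugepage "enabled" setting; because THP is not pinned to [never] on this VM (madvise is selected), the script goes on to read AnonHugePages, which the scan below resolves to 0 kB. A hedged sketch of that decision, assuming the usual sysfs path for the THP switch:

# Sketch only: account for anonymous (transparent) huge pages unless THP is off.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo never)
anon_kb=0
if [[ $thp_state != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_kb} kB"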
18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.339 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.339 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # 
continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.340 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:24.340 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.340 18:01:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.340 18:01:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.340 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.340 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:24.340 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:24.340 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.340 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.340 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.340 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.340 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.340 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8082460 kB' 'MemAvailable: 9465432 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 1261224 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 51020 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155772 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93672 kB' 'KernelStack: 6592 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 
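With anon pinned at 0, the verification continues by fetching HugePages_Surp and applying the same consistency check that already passed for per_node_1G_alloc above: HugePages_Total read back from meminfo must equal the requested nr_hugepages plus surplus plus reserved pages (1024 + 0 + 0 for this test, going by the dump above). A compact restatement of that check, with illustrative names:

# Sketch only: the hugepage consistency check the verify step keeps re-running.
expected=1024      # nr_hugepages requested by even_2G_alloc
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
if (( total == expected + surp + rsvd )); then
    echo "hugepages OK: total=$total surp=$surp rsvd=$rsvd"
else
    echo "hugepages mismatch: total=$total expected=$((expected + surp + rsvd))" >&2
fi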
00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.340 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.340 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # 
continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.341 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.341 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.342 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:24.342 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.342 18:01:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.342 18:01:42 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.342 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.342 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:24.342 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:24.342 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.342 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.342 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.342 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.342 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.342 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8082032 kB' 'MemAvailable: 9465004 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456408 kB' 'Inactive: 1261224 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155768 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93668 kB' 'KernelStack: 6576 kB' 'PageTables: 4660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.342 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.342 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- 
setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.343 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.343 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.343 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.604 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.604 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.604 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:24.604 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.604 18:01:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.604 nr_hugepages=1024 00:03:24.604 18:01:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.604 resv_hugepages=0 00:03:24.604 18:01:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.604 surplus_hugepages=0 00:03:24.604 18:01:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.604 anon_hugepages=0 00:03:24.604 18:01:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.604 18:01:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.604 18:01:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.605 18:01:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.605 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.605 18:01:42 -- setup/common.sh@18 -- # local node= 00:03:24.605 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:24.605 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.605 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.605 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.605 18:01:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.605 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.605 18:01:42 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8082032 kB' 'MemAvailable: 9465004 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456448 kB' 'Inactive: 1261224 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50892 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155768 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93668 kB' 'KernelStack: 6528 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 
18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.605 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.605 18:01:42 -- setup/common.sh@32 -- # continue 
00:03:24.605 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.606 18:01:42 -- setup/common.sh@33 -- # echo 1024 00:03:24.606 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.606 18:01:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.606 18:01:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.606 18:01:42 -- setup/hugepages.sh@27 -- # local node 00:03:24.606 18:01:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.606 18:01:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.606 18:01:42 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:24.606 18:01:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.606 18:01:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.606 18:01:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.606 18:01:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.606 18:01:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.606 18:01:42 -- setup/common.sh@18 -- # local node=0 00:03:24.606 18:01:42 -- setup/common.sh@19 -- # local var val 00:03:24.606 18:01:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.606 18:01:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.606 18:01:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.606 18:01:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.606 18:01:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.606 18:01:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8082032 kB' 'MemUsed: 4157072 kB' 'SwapCached: 0 kB' 'Active: 456708 kB' 'Inactive: 1261224 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'FilePages: 1599556 kB' 'Mapped: 50892 kB' 'AnonPages: 119976 kB' 'Shmem: 10484 kB' 'KernelStack: 6596 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 155768 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 
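By this point the lookups above have produced anon=0, surp=0 and resv=0, the pool has been read back as HugePages_Total: 1024, and hugepages.sh has checked that 1024 == nr_hugepages + surp + resv before re-running the HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo to attribute the whole pool to NUMA node 0. A small sketch of that consistency check using the values visible in this trace (single node, 1024 pages); the paths and the identity come from the trace, while the awk extraction is only for illustration:

    #!/usr/bin/env bash
    # Global pool must equal nr_hugepages + surplus + reserved, and the per-node
    # pools (from sysfs) must add up to the same total.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/HugePages_Total:/ {print $NF}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || { echo "global pool mismatch"; exit 1; }
    node_sum=0
    for f in /sys/devices/system/node/node*/meminfo; do
        node_sum=$(( node_sum + $(awk '/HugePages_Total:/ {print $NF}' "$f") ))
    done
    echo "node0=$node_sum expecting $total"   # the trace prints: node0=1024 expecting 1024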
00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.606 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.606 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- 
setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # continue 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.607 18:01:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.607 18:01:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.607 18:01:42 -- setup/common.sh@33 -- # echo 0 00:03:24.607 18:01:42 -- setup/common.sh@33 -- # return 0 00:03:24.607 18:01:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.607 18:01:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.607 18:01:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.607 18:01:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.607 
node0=1024 expecting 1024 00:03:24.607 18:01:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:24.607 18:01:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:24.607 00:03:24.607 real 0m0.516s 00:03:24.607 user 0m0.242s 00:03:24.607 sys 0m0.307s 00:03:24.607 18:01:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:24.607 18:01:42 -- common/autotest_common.sh@10 -- # set +x 00:03:24.607 ************************************ 00:03:24.607 END TEST even_2G_alloc 00:03:24.607 ************************************ 00:03:24.607 18:01:43 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:24.607 18:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.607 18:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.607 18:01:43 -- common/autotest_common.sh@10 -- # set +x 00:03:24.607 ************************************ 00:03:24.607 START TEST odd_alloc 00:03:24.607 ************************************ 00:03:24.607 18:01:43 -- common/autotest_common.sh@1114 -- # odd_alloc 00:03:24.607 18:01:43 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:24.607 18:01:43 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:24.607 18:01:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:24.607 18:01:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.607 18:01:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.607 18:01:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.607 18:01:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:24.607 18:01:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:24.607 18:01:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.607 18:01:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.607 18:01:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:24.607 18:01:43 -- setup/hugepages.sh@83 -- # : 0 00:03:24.607 18:01:43 -- setup/hugepages.sh@84 -- # : 0 00:03:24.607 18:01:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.607 18:01:43 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:24.607 18:01:43 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:24.607 18:01:43 -- setup/hugepages.sh@160 -- # setup output 00:03:24.607 18:01:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.607 18:01:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:24.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:24.868 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.868 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:24.868 18:01:43 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:24.868 18:01:43 -- setup/hugepages.sh@89 -- # local node 00:03:24.868 18:01:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.868 18:01:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.868 18:01:43 -- setup/hugepages.sh@92 -- # local surp 00:03:24.868 18:01:43 -- setup/hugepages.sh@93 -- # local resv 00:03:24.868 18:01:43 -- setup/hugepages.sh@94 -- # local anon 00:03:24.868 18:01:43 -- 
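A note on the numbers in the odd_alloc setup just traced: get_test_nr_hugepages is asked for 2098176 kB (HUGEMEM=2049 MiB), and with the 2048 kB Hugepagesize reported by /proc/meminfo below that comes out to the odd count nr_hugepages=1025, matching the Hugetlb: 2099200 kB figure (1025 * 2048 kB). A minimal sketch of that arithmetic, assuming a round-half-up rule; the exact expression used by setup/hugepages.sh is not shown in this excerpt:

# Sketch only -- reproduces the 2098176 kB -> 1025 pages arithmetic seen in this
# trace; the rounding rule here is an assumption, not a quote of setup/hugepages.sh.
size_kb=2098176          # HUGEMEM=2049 MiB expressed in kB (2049 * 1024)
hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo
nr_hugepages=$(( (size_kb + hugepagesize_kb / 2) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025 (odd by design)
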
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.868 18:01:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.868 18:01:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.868 18:01:43 -- setup/common.sh@18 -- # local node= 00:03:24.868 18:01:43 -- setup/common.sh@19 -- # local var val 00:03:24.868 18:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.868 18:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.868 18:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.868 18:01:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.868 18:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.868 18:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8080632 kB' 'MemAvailable: 9463604 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456892 kB' 'Inactive: 1261224 kB' 'Active(anon): 129044 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 120180 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155728 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93628 kB' 'KernelStack: 6520 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.868 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.868 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # 
continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.869 18:01:43 -- setup/common.sh@33 -- # echo 0 00:03:24.869 18:01:43 -- setup/common.sh@33 -- # return 0 00:03:24.869 18:01:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.869 18:01:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.869 18:01:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.869 18:01:43 -- setup/common.sh@18 -- # local node= 00:03:24.869 18:01:43 -- setup/common.sh@19 -- # local var val 00:03:24.869 18:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.869 18:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.869 18:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.869 18:01:43 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.869 18:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.869 18:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8080632 kB' 'MemAvailable: 9463604 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456892 kB' 'Inactive: 1261224 kB' 'Active(anon): 129044 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 120180 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155728 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93628 kB' 'KernelStack: 6520 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.869 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.869 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.870 18:01:43 -- setup/common.sh@31 
-- # read -r var val _ 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.870 18:01:43 -- setup/common.sh@32 -- # continue 00:03:24.870 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 
00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.133 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.133 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.134 18:01:43 -- setup/common.sh@33 -- # echo 0 00:03:25.134 18:01:43 -- setup/common.sh@33 -- # return 0 00:03:25.134 18:01:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.134 18:01:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.134 18:01:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.134 18:01:43 -- setup/common.sh@18 -- # local node= 00:03:25.134 18:01:43 -- setup/common.sh@19 -- # local var val 00:03:25.134 18:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.134 18:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.134 18:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.134 18:01:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.134 18:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.134 18:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8080632 kB' 'MemAvailable: 9463604 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456820 kB' 'Inactive: 1261224 kB' 'Active(anon): 128972 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120128 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155740 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93640 kB' 'KernelStack: 6512 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55240 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 
18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.134 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.134 18:01:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 
-- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.135 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.135 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.135 18:01:43 -- setup/common.sh@33 -- # echo 0 00:03:25.135 18:01:43 -- setup/common.sh@33 -- # return 0 00:03:25.135 18:01:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.135 nr_hugepages=1025 00:03:25.135 18:01:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:25.135 resv_hugepages=0 00:03:25.135 18:01:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.135 surplus_hugepages=0 00:03:25.135 18:01:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.135 anon_hugepages=0 00:03:25.135 18:01:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.135 18:01:43 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.135 18:01:43 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:25.135 18:01:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.135 18:01:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.135 18:01:43 -- setup/common.sh@18 -- # local node= 00:03:25.135 18:01:43 -- setup/common.sh@19 -- # local var val 00:03:25.135 18:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.136 18:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.136 18:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.136 18:01:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.136 18:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.136 18:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8080632 kB' 'MemAvailable: 9463604 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456856 kB' 'Inactive: 1261224 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155736 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93636 kB' 'KernelStack: 6480 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.136 
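The long runs of [[ field == \H\u\g\e\P\a\g\e\s... ]] / continue in this trace all come from the same get_meminfo pattern: the meminfo text is read with IFS=': ', each field name is compared against the requested key, and only the matching line's value is echoed (0 if nothing matches). A simplified, self-contained sketch of that pattern; it reads /proc/meminfo directly and omits the mapfile buffering, the per-node meminfo files, and the "Node N " prefix stripping that the real helper in setup/common.sh performs:

# Simplified sketch of the get_meminfo pattern traced above (system-wide only).
get_meminfo() {
    # Return the value of one /proc/meminfo field, or 0 if it is absent.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    echo 0
}
get_meminfo HugePages_Surp   # -> 0 on the system traced here
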
18:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 
18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.136 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.136 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.137 18:01:43 -- setup/common.sh@33 -- # echo 1025 00:03:25.137 18:01:43 -- setup/common.sh@33 -- # return 0 00:03:25.137 18:01:43 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:25.137 18:01:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.137 18:01:43 -- setup/hugepages.sh@27 -- # local node 00:03:25.137 18:01:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.137 18:01:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
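The xtrace above is setup/common.sh scanning /proc/meminfo field by field: each "Name: value kB" line is split with IFS=': ', every field other than the requested one (HugePages_Total here) is skipped with continue, and the matching value (1025) is echoed back. A minimal stand-alone sketch of that lookup, assuming a plain meminfo layout; the helper name is hypothetical and this is not the SPDK script itself:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups read that node's own meminfo, as the trace below does for node 0
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # per-node meminfo lines carry a "Node N " prefix; drop it before splitting on ': '
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed "s/^Node $node //" "$mem_f")
    return 1
}
# get_meminfo_sketch HugePages_Total   -> prints 1025 on the VM in this run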
00:03:25.137 18:01:43 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:25.137 18:01:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.137 18:01:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.137 18:01:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.137 18:01:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.137 18:01:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.137 18:01:43 -- setup/common.sh@18 -- # local node=0 00:03:25.137 18:01:43 -- setup/common.sh@19 -- # local var val 00:03:25.137 18:01:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.137 18:01:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.137 18:01:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.137 18:01:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.137 18:01:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.137 18:01:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8080632 kB' 'MemUsed: 4158472 kB' 'SwapCached: 0 kB' 'Active: 456808 kB' 'Inactive: 1261224 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1599556 kB' 'Mapped: 50824 kB' 'AnonPages: 120060 kB' 'Shmem: 10484 kB' 'KernelStack: 6480 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 155736 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 
18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.137 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.137 18:01:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 
18:01:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # continue 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.138 18:01:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.138 18:01:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.138 18:01:43 -- setup/common.sh@33 -- # echo 0 00:03:25.138 18:01:43 -- setup/common.sh@33 -- # return 0 00:03:25.138 18:01:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.138 18:01:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.138 18:01:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.138 node0=1025 expecting 1025 00:03:25.138 18:01:43 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:25.138 18:01:43 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:25.138 00:03:25.138 real 0m0.529s 00:03:25.138 user 0m0.253s 00:03:25.138 sys 0m0.312s 00:03:25.138 18:01:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:25.138 18:01:43 -- common/autotest_common.sh@10 -- # set +x 00:03:25.138 ************************************ 00:03:25.138 END TEST odd_alloc 00:03:25.138 ************************************ 00:03:25.138 18:01:43 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:25.138 18:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.138 18:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.138 18:01:43 -- common/autotest_common.sh@10 -- # set +x 00:03:25.138 ************************************ 00:03:25.138 START TEST custom_alloc 00:03:25.138 ************************************ 00:03:25.138 18:01:43 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:25.138 18:01:43 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:25.138 18:01:43 -- setup/hugepages.sh@169 -- # local node 00:03:25.138 18:01:43 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:25.138 18:01:43 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:25.138 18:01:43 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:25.138 18:01:43 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:03:25.138 18:01:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:25.138 18:01:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:25.138 18:01:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:25.138 18:01:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.138 18:01:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.138 18:01:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.138 18:01:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:25.138 18:01:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.138 18:01:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.138 18:01:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:25.138 18:01:43 -- setup/hugepages.sh@83 -- # : 0 00:03:25.138 18:01:43 -- setup/hugepages.sh@84 -- # : 0 00:03:25.138 18:01:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:25.138 18:01:43 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:25.138 18:01:43 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:25.138 18:01:43 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:25.138 18:01:43 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:25.138 18:01:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:25.138 18:01:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.138 18:01:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:25.138 18:01:43 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:25.139 18:01:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.139 18:01:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.139 18:01:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:25.139 18:01:43 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:25.139 18:01:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:25.139 18:01:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:25.139 18:01:43 -- setup/hugepages.sh@78 -- # return 0 00:03:25.139 18:01:43 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:25.139 18:01:43 -- setup/hugepages.sh@187 -- # setup output 00:03:25.139 18:01:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.139 18:01:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.423 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:25.423 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:25.423 18:01:43 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:25.423 18:01:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:25.423 18:01:44 -- setup/hugepages.sh@89 -- # local node 00:03:25.423 18:01:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.423 18:01:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.423 18:01:44 -- setup/hugepages.sh@92 -- # local surp 
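custom_alloc begins by converting the 1048576 kB request into default-size hugepages and pinning them all to the only node present: 1048576 kB divided by the 2048 kB default hugepage size (per the meminfo dumps in this log) gives 512 pages, matching the HUGENODE='nodes_hp[0]=512' assignment traced above. A short sketch of that arithmetic; the variable names are illustrative, not the script's:

default_hugepage_kb=2048                    # Hugepagesize reported in /proc/meminfo
requested_kb=1048576                        # argument passed to get_test_nr_hugepages
nr_hugepages=$(( requested_kb / default_hugepage_kb ))   # 512
nodes_hp[0]=$nr_hugepages                   # single-node VM: everything lands on node 0
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"       # -> nodes_hp[0]=512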
00:03:25.423 18:01:44 -- setup/hugepages.sh@93 -- # local resv 00:03:25.423 18:01:44 -- setup/hugepages.sh@94 -- # local anon 00:03:25.423 18:01:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.423 18:01:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.423 18:01:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.423 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:25.423 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:25.423 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.423 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.423 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.423 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.423 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.423 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.423 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.423 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.423 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9131808 kB' 'MemAvailable: 10514780 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456912 kB' 'Inactive: 1261224 kB' 'Active(anon): 129064 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120196 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155740 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93640 kB' 'KernelStack: 6520 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55256 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.423 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.423 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.423 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.423 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.423 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.424 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.424 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.701 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.701 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.702 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:25.702 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:25.702 18:01:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:25.702 18:01:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.702 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.702 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:25.702 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:25.702 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.702 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
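The AnonHugePages lookup just traced (anon=0) and the HugePages_Surp and HugePages_Rsvd lookups that follow feed verify_nr_hugepages, whose core check is the (( total == nr_hugepages + surp + resv )) comparison already seen at hugepages.sh@110 earlier in this log. A hedged sketch of that check, reusing the hypothetical helper sketched above rather than the script's own functions:

verify_nr_hugepages_sketch() {
    local expected=$1                        # 512 for custom_alloc, 1025 for odd_alloc
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # succeed only when the kernel exposes exactly the pages the test asked for,
    # once surplus and reserved pages are accounted for
    (( total == expected + surp + resv ))
}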
00:03:25.702 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.702 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.702 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.702 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9131808 kB' 'MemAvailable: 10514780 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 1261224 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155748 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93648 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.702 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.702 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- 
setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.703 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.703 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.704 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:25.704 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:25.704 18:01:44 -- setup/hugepages.sh@99 -- # surp=0 00:03:25.704 18:01:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.704 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.704 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:25.704 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:25.704 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.704 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.704 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.704 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.704 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.704 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9131808 kB' 'MemAvailable: 10514780 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456584 kB' 'Inactive: 1261224 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 
50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155744 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93644 kB' 'KernelStack: 6496 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.704 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.704 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 
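The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" above and below are setup/common.sh's get_meminfo walking every row of /proc/meminfo until it reaches the requested key (HugePages_Rsvd at this point). A minimal sketch of that loop, reconstructed from the xtrace — the function and variable names follow what the trace shows, but the body is a best-effort reconstruction, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo, or from
    # the per-node file when NODE is given; return non-zero if the key is absent.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node rows
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 in the run captured here
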
00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.705 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.705 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.706 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:25.706 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:25.706 18:01:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:25.706 nr_hugepages=512 00:03:25.706 18:01:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:25.706 resv_hugepages=0 00:03:25.706 18:01:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.706 surplus_hugepages=0 00:03:25.706 18:01:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.706 anon_hugepages=0 00:03:25.706 18:01:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.706 18:01:44 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:25.706 18:01:44 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:25.706 18:01:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.706 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.706 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:25.706 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:25.706 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.706 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.706 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.706 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.706 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.706 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9131808 kB' 'MemAvailable: 10514780 kB' 'Buffers: 3704 kB' 'Cached: 1595852 kB' 'SwapCached: 0 kB' 'Active: 456368 kB' 'Inactive: 1261224 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119888 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155736 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93636 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 321368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55224 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.706 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.706 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
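Once the surplus and reserved counters are back (both 0 in this run), hugepages.sh cross-checks them against the pool it configured: 512 pages at the 2048 kB Hugepagesize shown in the meminfo dumps, i.e. the 1048576 kB Hugetlb total. The HugePages_Total scan that supplies the last number is still running below; condensed, and reusing the get_meminfo sketch above (the failure messages are illustrative only, not what the script prints):

    nr_hugepages=512                        # what the test configured
    surp=$(get_meminfo HugePages_Surp)      # 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # 0 here
    total=$(get_meminfo HugePages_Total)    # 512 here
    # hugepages.sh@107/@109/@110 in the trace, roughly:
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages" >&2
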
00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.707 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.707 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.708 18:01:44 -- setup/common.sh@33 -- # echo 512 00:03:25.708 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:25.708 18:01:44 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:25.708 18:01:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.708 18:01:44 -- setup/hugepages.sh@27 -- # local node 00:03:25.708 18:01:44 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:03:25.708 18:01:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.708 18:01:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:25.708 18:01:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.708 18:01:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.708 18:01:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.708 18:01:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.708 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.708 18:01:44 -- setup/common.sh@18 -- # local node=0 00:03:25.708 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:25.708 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.708 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.708 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.708 18:01:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.708 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.708 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 9131808 kB' 'MemUsed: 3107296 kB' 'SwapCached: 0 kB' 'Active: 456864 kB' 'Inactive: 1261224 kB' 'Active(anon): 129016 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1599556 kB' 'Mapped: 50824 kB' 'AnonPages: 120128 kB' 'Shmem: 10484 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 155728 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 
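This HugePages_Surp lookup is the per-node variant: node=0, so get_meminfo reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and the pass ends a little further down with "node0=512 expecting 512". In outline — the array names match the trace, but where the 512 stored in nodes_sys comes from is not visible here, so it is written as a literal; treat this as a sketch rather than the actual script:

    shopt -s extglob
    declare -a nodes_sys nodes_test          # indexed by NUMA node number

    # get_nodes: one entry per /sys/devices/system/node/node<N>
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512        # kernel-reported pages on this node
    done
    no_nodes=${#nodes_sys[@]}                # 1 in this run

    # What the test expects per node, adjusted by reserved/surplus pages
    nodes_test[0]=512
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # resv=0 above
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # 0 here
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]
    done
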
18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.708 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.708 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # continue 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.709 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.709 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.709 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:25.709 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:25.709 18:01:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.709 18:01:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.709 18:01:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.709 18:01:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.709 node0=512 expecting 512 00:03:25.709 18:01:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.709 18:01:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:25.709 00:03:25.709 real 0m0.548s 00:03:25.709 user 0m0.257s 00:03:25.709 sys 0m0.300s 00:03:25.709 18:01:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:25.709 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:03:25.709 ************************************ 00:03:25.709 END TEST custom_alloc 00:03:25.709 ************************************ 00:03:25.709 18:01:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:25.709 18:01:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:25.709 18:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:25.709 18:01:44 -- common/autotest_common.sh@10 -- # set +x 00:03:25.709 ************************************ 00:03:25.709 START TEST no_shrink_alloc 00:03:25.709 ************************************ 00:03:25.709 18:01:44 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:03:25.709 18:01:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:25.709 18:01:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.709 18:01:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.709 18:01:44 -- 
setup/hugepages.sh@51 -- # shift 00:03:25.709 18:01:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.709 18:01:44 -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.709 18:01:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.709 18:01:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.709 18:01:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.709 18:01:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.709 18:01:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.709 18:01:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.709 18:01:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:25.709 18:01:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.709 18:01:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.709 18:01:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.709 18:01:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.709 18:01:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:25.709 18:01:44 -- setup/hugepages.sh@73 -- # return 0 00:03:25.709 18:01:44 -- setup/hugepages.sh@198 -- # setup output 00:03:25.709 18:01:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.709 18:01:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.984 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:25.984 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.248 18:01:44 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:26.248 18:01:44 -- setup/hugepages.sh@89 -- # local node 00:03:26.248 18:01:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.248 18:01:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.248 18:01:44 -- setup/hugepages.sh@92 -- # local surp 00:03:26.248 18:01:44 -- setup/hugepages.sh@93 -- # local resv 00:03:26.248 18:01:44 -- setup/hugepages.sh@94 -- # local anon 00:03:26.248 18:01:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.248 18:01:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.248 18:01:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.248 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:26.248 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:26.248 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.248 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.248 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.248 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.248 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.248 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8083544 kB' 'MemAvailable: 9466520 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 457156 kB' 'Inactive: 1261228 kB' 'Active(anon): 129308 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 
'Mapped: 50976 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155712 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93612 kB' 'KernelStack: 6504 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.248 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.248 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.249 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:26.249 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:26.249 18:01:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.249 18:01:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.249 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.249 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:26.249 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:26.249 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.249 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.249 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.249 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.249 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.249 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8083296 kB' 'MemAvailable: 9466272 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 456724 kB' 'Inactive: 1261228 kB' 'Active(anon): 128876 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120000 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155732 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93632 kB' 'KernelStack: 6528 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.249 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.249 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
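The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time with IFS=': ', skipping every field that is not the one requested (AnonHugePages earlier, HugePages_Surp here) and echoing the matching value. A minimal self-contained sketch of that scan follows; get_meminfo_value is a hypothetical stand-in for the repository helper, and reading /proc/meminfo directly is an assumption for illustration only.

#!/usr/bin/env bash
# Sketch of the key scan traced above: split each /proc/meminfo line on
# ': ' and print the value of the requested field, skipping the rest.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # The trace does this with one [[ ... ]] / continue pair per field.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Total   # e.g. prints 1024 on the VM in this run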
00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.250 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.250 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.250 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:26.250 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:26.250 18:01:44 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.250 18:01:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.250 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.251 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:26.251 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:26.251 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.251 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.251 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.251 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.251 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.251 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8083316 kB' 'MemAvailable: 9466292 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 456416 kB' 'Inactive: 1261228 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155732 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93632 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 
-- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.251 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.251 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 
18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.252 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:26.252 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:26.252 18:01:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.252 18:01:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.252 nr_hugepages=1024 00:03:26.252 18:01:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.252 resv_hugepages=0 00:03:26.252 18:01:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.252 surplus_hugepages=0 00:03:26.252 18:01:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.252 anon_hugepages=0 00:03:26.252 18:01:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.252 18:01:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.252 18:01:44 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:26.252 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.252 18:01:44 -- setup/common.sh@18 -- # local node= 00:03:26.252 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:26.252 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.252 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.252 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.252 18:01:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.252 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.252 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8083316 kB' 'MemAvailable: 9466292 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 456664 kB' 'Inactive: 1261228 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 'Mapped: 50824 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155720 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93620 kB' 'KernelStack: 6512 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.252 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.252 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
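With the anonymous, surplus and reserved counts read back (anon=0, surp=0, resv=0 above), hugepages.sh compares them against the configured count, which is the (( 1024 == nr_hugepages + surp + resv )) test visible in the trace. Below is a standalone sketch of that accounting check; it uses awk instead of the script's own get_meminfo, and the "expected" argument is an assumption standing in for the nr_hugepages value the real script derives upstream.

#!/usr/bin/env bash
# Sketch of the hugepage accounting check: the kernel's HugePages_Total
# should equal the expected count plus surplus and reserved pages.
set -euo pipefail

expected=${1:-1024}
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)

if (( total == expected + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    exit 1
fi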
00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 
-- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.253 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.253 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.254 18:01:44 -- setup/common.sh@33 -- # echo 1024 00:03:26.254 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:26.254 18:01:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.254 18:01:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.254 18:01:44 -- setup/hugepages.sh@27 -- # local node 00:03:26.254 18:01:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.254 18:01:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.254 18:01:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:26.254 18:01:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.254 18:01:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.254 18:01:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.254 18:01:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.254 18:01:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.254 18:01:44 -- setup/common.sh@18 -- # local node=0 00:03:26.254 18:01:44 -- setup/common.sh@19 -- # local var val 00:03:26.254 18:01:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.254 18:01:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.254 18:01:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.254 18:01:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.254 18:01:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.254 18:01:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8083676 kB' 'MemUsed: 4155428 kB' 'SwapCached: 0 kB' 'Active: 456748 kB' 'Inactive: 1261228 kB' 'Active(anon): 128900 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 
'Writeback: 0 kB' 'FilePages: 1599560 kB' 'Mapped: 50824 kB' 'AnonPages: 120012 kB' 'Shmem: 10484 kB' 'KernelStack: 6528 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62100 kB' 'Slab: 155708 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 
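At this point the same lookup is repeated per NUMA node: get_meminfo HugePages_Surp 0 switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line before parsing, as the mapfile and ${mem[@]#Node ...} expansions in the trace show. A sketch of that node-aware variant follows; node_meminfo_value is an illustrative name, not the script's, and the simple "Node <N> " prefix strip is an assumption in place of the extglob pattern the real helper uses.

#!/usr/bin/env bash
# Sketch of the per-node lookup: prefer the node-specific meminfo file when
# a node id is given and it exists, otherwise fall back to /proc/meminfo.
node_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the
    # same "key: value" parse works for both sources.
    mem=("${mem[@]#Node "$node" }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_value HugePages_Surp 0   # e.g. prints 0 for node0 in the run logged here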
00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.254 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.254 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # continue 00:03:26.255 18:01:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.255 18:01:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.255 18:01:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.255 18:01:44 -- setup/common.sh@33 -- # echo 0 00:03:26.255 18:01:44 -- setup/common.sh@33 -- # return 0 00:03:26.255 18:01:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.255 18:01:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.255 18:01:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.255 node0=1024 expecting 1024 00:03:26.255 18:01:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.255 18:01:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.255 18:01:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.255 18:01:44 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:26.255 18:01:44 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:26.255 18:01:44 -- setup/hugepages.sh@202 -- # setup output 00:03:26.255 18:01:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.255 18:01:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:26.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:26.778 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.778 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.778 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:26.778 18:01:45 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:26.778 18:01:45 -- setup/hugepages.sh@89 -- # local node 00:03:26.778 18:01:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.778 18:01:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.778 18:01:45 -- setup/hugepages.sh@92 -- # local surp 00:03:26.778 18:01:45 -- setup/hugepages.sh@93 -- # local resv 00:03:26.778 18:01:45 -- setup/hugepages.sh@94 -- # local anon 00:03:26.778 18:01:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.778 18:01:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.778 18:01:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.778 18:01:45 -- setup/common.sh@18 -- # local node= 00:03:26.778 18:01:45 -- setup/common.sh@19 -- # local var val 00:03:26.778 18:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.778 18:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.778 18:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.778 18:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.778 18:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.778 18:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8088292 kB' 'MemAvailable: 9471268 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 456840 kB' 'Inactive: 1261228 kB' 'Active(anon): 128992 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120160 kB' 'Mapped: 51060 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155708 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93608 kB' 'KernelStack: 6528 kB' 'PageTables: 4544 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.778 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.778 18:01:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 
18:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.779 18:01:45 -- setup/common.sh@33 -- # echo 0 00:03:26.779 18:01:45 -- setup/common.sh@33 -- # return 0 00:03:26.779 18:01:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.779 18:01:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.779 18:01:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.779 18:01:45 -- setup/common.sh@18 -- # local node= 00:03:26.779 18:01:45 -- setup/common.sh@19 -- # local var val 00:03:26.779 18:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.779 18:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.779 18:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.779 18:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.779 18:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.779 18:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8088044 kB' 'MemAvailable: 9471020 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 456904 kB' 'Inactive: 1261228 kB' 'Active(anon): 129056 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120196 kB' 'Mapped: 50880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155716 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93616 kB' 'KernelStack: 6536 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 321568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55272 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 
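The chunk above is one complete get_meminfo pass: setup/common.sh picks /proc/meminfo (or a per-node meminfo file when a node is given), snapshots it with mapfile -t mem, strips any leading "Node N" prefix, prints the snapshot, and then walks it field by field with IFS=': ' until the requested key (HugePages_Surp here) matches, echoing its value and returning. A stand-alone sketch of that lookup pattern follows, with a hypothetical helper name (meminfo_value) and simplified details; the real helper in setup/common.sh is structured differently:

#!/usr/bin/env bash
# Hypothetical sketch of the lookup pattern traced above; name and details are
# assumptions, not the setup/common.sh source.
meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # per-node files prefix each line with "Node N"
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"            # numeric value only; a trailing "kB" lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

meminfo_value HugePages_Total      # prints 1024 on this runner
meminfo_value HugePages_Surp 0     # per-node variant, prints 0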
00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.779 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.779 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 
18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': 
' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.780 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.780 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.781 18:01:45 -- setup/common.sh@33 -- # echo 0 00:03:26.781 18:01:45 -- setup/common.sh@33 -- # return 0 00:03:26.781 18:01:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.781 18:01:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.781 18:01:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.781 18:01:45 -- setup/common.sh@18 -- # local node= 00:03:26.781 18:01:45 -- setup/common.sh@19 -- # local var val 00:03:26.781 18:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.781 18:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.781 18:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.781 18:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.781 18:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.781 18:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8088296 kB' 'MemAvailable: 9471272 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 453916 kB' 'Inactive: 1261228 kB' 'Active(anon): 126068 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117212 kB' 'Mapped: 50088 kB' 'Shmem: 10484 kB' 'KReclaimable: 62100 kB' 'Slab: 155612 kB' 'SReclaimable: 62100 kB' 'SUnreclaim: 93512 kB' 'KernelStack: 6440 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 
00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.781 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.781 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.782 18:01:45 -- setup/common.sh@33 -- # echo 0 00:03:26.782 18:01:45 -- setup/common.sh@33 -- # return 0 00:03:26.782 18:01:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.782 18:01:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.782 nr_hugepages=1024 00:03:26.782 18:01:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.782 resv_hugepages=0 00:03:26.782 surplus_hugepages=0 00:03:26.782 anon_hugepages=0 00:03:26.782 18:01:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.782 18:01:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.782 18:01:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.782 18:01:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.782 18:01:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.782 18:01:45 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:26.782 18:01:45 -- setup/common.sh@18 -- # local node= 00:03:26.782 18:01:45 -- setup/common.sh@19 -- # local var val 00:03:26.782 18:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.782 18:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.782 18:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.782 18:01:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.782 18:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.782 18:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8088296 kB' 'MemAvailable: 9471256 kB' 'Buffers: 3704 kB' 'Cached: 1595856 kB' 'SwapCached: 0 kB' 'Active: 453904 kB' 'Inactive: 1261228 kB' 'Active(anon): 126056 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117192 kB' 'Mapped: 50084 kB' 'Shmem: 10484 kB' 'KReclaimable: 62072 kB' 'Slab: 155528 kB' 'SReclaimable: 62072 kB' 'SUnreclaim: 93456 kB' 'KernelStack: 6424 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 303108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 4024320 kB' 'DirectMap1G: 10485760 kB' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- 
setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.782 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.782 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 
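Interleaved with these scans, setup/hugepages.sh is doing the actual verification: the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups all returned 0, the script echoed nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and the arithmetic at hugepages.sh@107-@110 checks that the expected 1024 pages reconcile with the surplus and reserved counts and with HugePages_Total read back from /proc/meminfo. A hedged condensation of that accounting step, reusing the meminfo_value sketch above (the real verify_nr_hugepages is organized differently):

# Hypothetical condensation of the accounting traced above; not verify_nr_hugepages itself.
check_hugepage_accounting() {
    local expected=$1                      # 1024 in this run
    local anon surp resv total
    anon=$(meminfo_value AnonHugePages)    # kB of transparent hugepages, 0 here
    surp=$(meminfo_value HugePages_Surp)
    resv=$(meminfo_value HugePages_Rsvd)
    total=$(meminfo_value HugePages_Total)

    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # The pool is considered consistent only if the expected size matches the
    # total adjusted for surplus/reserved pages as well as the raw total.
    (( expected == total + surp + resv )) || return 1
    (( expected == total )) || return 1
}

check_hugepage_accounting 1024 && echo "hugepage pool consistent"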
00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 
18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.783 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.783 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.784 18:01:45 -- setup/common.sh@33 -- # echo 1024 00:03:26.784 18:01:45 -- setup/common.sh@33 -- # return 0 00:03:26.784 18:01:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.784 18:01:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.784 18:01:45 -- setup/hugepages.sh@27 -- # local node 00:03:26.784 18:01:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.784 18:01:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.784 18:01:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:26.784 18:01:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.784 18:01:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.784 18:01:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.784 18:01:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.784 18:01:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.784 18:01:45 -- setup/common.sh@18 -- # local node=0 00:03:26.784 18:01:45 -- setup/common.sh@19 -- # local var val 00:03:26.784 18:01:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.784 18:01:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.784 18:01:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.784 18:01:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.784 18:01:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.784 18:01:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 8088956 kB' 'MemUsed: 4150148 kB' 'SwapCached: 0 kB' 'Active: 453876 kB' 'Inactive: 1261228 kB' 'Active(anon): 126028 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1261228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 
'Writeback: 0 kB' 'FilePages: 1599560 kB' 'Mapped: 50084 kB' 'AnonPages: 117192 kB' 'Shmem: 10484 kB' 'KernelStack: 6424 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62072 kB' 'Slab: 155528 kB' 'SReclaimable: 62072 kB' 'SUnreclaim: 93456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 
18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- 
# continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.784 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.784 18:01:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@32 -- # continue 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.785 18:01:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.785 18:01:45 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.785 18:01:45 -- setup/common.sh@33 -- # echo 0 00:03:26.785 18:01:45 -- setup/common.sh@33 -- # return 0 00:03:26.785 18:01:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.785 node0=1024 expecting 1024 00:03:26.785 18:01:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.785 18:01:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.785 18:01:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.785 18:01:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.785 18:01:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.785 00:03:26.785 real 0m1.115s 00:03:26.785 user 0m0.561s 00:03:26.785 sys 0m0.554s 00:03:26.785 18:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:26.785 ************************************ 00:03:26.785 END TEST no_shrink_alloc 00:03:26.785 ************************************ 00:03:26.785 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.044 18:01:45 -- setup/hugepages.sh@217 -- # clear_hp 00:03:27.044 18:01:45 -- setup/hugepages.sh@37 -- # local node hp 00:03:27.044 18:01:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:27.044 18:01:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.044 18:01:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.044 18:01:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.044 18:01:45 -- setup/hugepages.sh@41 -- # echo 0 00:03:27.044 18:01:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:27.044 18:01:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:27.044 ************************************ 00:03:27.044 END TEST hugepages 00:03:27.044 ************************************ 00:03:27.044 00:03:27.044 real 0m4.684s 00:03:27.044 user 0m2.282s 00:03:27.044 sys 0m2.466s 00:03:27.044 18:01:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:27.044 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.044 18:01:45 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:27.044 18:01:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.044 18:01:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.044 18:01:45 -- common/autotest_common.sh@10 -- # set +x 00:03:27.044 ************************************ 00:03:27.044 START TEST driver 00:03:27.044 ************************************ 00:03:27.044 18:01:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:27.044 * Looking for test storage... 
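The no_shrink_alloc trace above is setup/common.sh's get_meminfo walking every field of the per-node meminfo file until it reaches HugePages_Total (and later HugePages_Surp) and echoing the value. A minimal standalone sketch of that lookup, assuming a single NUMA node; the function name and fallback behaviour are illustrative, not the repository helper itself:

#!/usr/bin/env bash
# Sketch: fetch one field from a node's meminfo, mirroring the var/val parsing
# seen in the trace above. Paths and the node-0 default are assumptions.
get_meminfo_field() {
    local get=$1 node=${2:-0} line var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix each row with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Usage, echoing the same check the test prints ("node0=1024 expecting 1024"):
echo "node0=$(get_meminfo_field HugePages_Total) expecting 1024," \
     "surplus=$(get_meminfo_field HugePages_Surp)"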
00:03:27.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.044 18:01:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:27.044 18:01:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:27.044 18:01:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:27.044 18:01:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:27.044 18:01:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:27.044 18:01:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:27.044 18:01:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:27.044 18:01:45 -- scripts/common.sh@335 -- # IFS=.-: 00:03:27.044 18:01:45 -- scripts/common.sh@335 -- # read -ra ver1 00:03:27.044 18:01:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:27.044 18:01:45 -- scripts/common.sh@336 -- # read -ra ver2 00:03:27.044 18:01:45 -- scripts/common.sh@337 -- # local 'op=<' 00:03:27.044 18:01:45 -- scripts/common.sh@339 -- # ver1_l=2 00:03:27.044 18:01:45 -- scripts/common.sh@340 -- # ver2_l=1 00:03:27.044 18:01:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:27.044 18:01:45 -- scripts/common.sh@343 -- # case "$op" in 00:03:27.044 18:01:45 -- scripts/common.sh@344 -- # : 1 00:03:27.044 18:01:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:27.044 18:01:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:27.044 18:01:45 -- scripts/common.sh@364 -- # decimal 1 00:03:27.044 18:01:45 -- scripts/common.sh@352 -- # local d=1 00:03:27.044 18:01:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:27.044 18:01:45 -- scripts/common.sh@354 -- # echo 1 00:03:27.044 18:01:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:27.044 18:01:45 -- scripts/common.sh@365 -- # decimal 2 00:03:27.044 18:01:45 -- scripts/common.sh@352 -- # local d=2 00:03:27.044 18:01:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:27.044 18:01:45 -- scripts/common.sh@354 -- # echo 2 00:03:27.044 18:01:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:27.044 18:01:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:27.044 18:01:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:27.044 18:01:45 -- scripts/common.sh@367 -- # return 0 00:03:27.044 18:01:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:27.044 18:01:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.044 --rc genhtml_branch_coverage=1 00:03:27.044 --rc genhtml_function_coverage=1 00:03:27.044 --rc genhtml_legend=1 00:03:27.044 --rc geninfo_all_blocks=1 00:03:27.044 --rc geninfo_unexecuted_blocks=1 00:03:27.044 00:03:27.044 ' 00:03:27.044 18:01:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:27.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.045 --rc genhtml_branch_coverage=1 00:03:27.045 --rc genhtml_function_coverage=1 00:03:27.045 --rc genhtml_legend=1 00:03:27.045 --rc geninfo_all_blocks=1 00:03:27.045 --rc geninfo_unexecuted_blocks=1 00:03:27.045 00:03:27.045 ' 00:03:27.045 18:01:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:27.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.045 --rc genhtml_branch_coverage=1 00:03:27.045 --rc genhtml_function_coverage=1 00:03:27.045 --rc genhtml_legend=1 00:03:27.045 --rc geninfo_all_blocks=1 00:03:27.045 --rc geninfo_unexecuted_blocks=1 00:03:27.045 00:03:27.045 ' 00:03:27.045 18:01:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:27.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:27.045 --rc genhtml_branch_coverage=1 00:03:27.045 --rc genhtml_function_coverage=1 00:03:27.045 --rc genhtml_legend=1 00:03:27.045 --rc geninfo_all_blocks=1 00:03:27.045 --rc geninfo_unexecuted_blocks=1 00:03:27.045 00:03:27.045 ' 00:03:27.045 18:01:45 -- setup/driver.sh@68 -- # setup reset 00:03:27.045 18:01:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.045 18:01:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:27.612 18:01:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:27.612 18:01:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.612 18:01:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.612 18:01:46 -- common/autotest_common.sh@10 -- # set +x 00:03:27.612 ************************************ 00:03:27.612 START TEST guess_driver 00:03:27.612 ************************************ 00:03:27.612 18:01:46 -- common/autotest_common.sh@1114 -- # guess_driver 00:03:27.612 18:01:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:27.612 18:01:46 -- setup/driver.sh@47 -- # local fail=0 00:03:27.612 18:01:46 -- setup/driver.sh@49 -- # pick_driver 00:03:27.612 18:01:46 -- setup/driver.sh@36 -- # vfio 00:03:27.612 18:01:46 -- setup/driver.sh@21 -- # local iommu_grups 00:03:27.612 18:01:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:27.612 18:01:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:27.612 18:01:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:27.612 18:01:46 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:27.612 18:01:46 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:27.612 18:01:46 -- setup/driver.sh@32 -- # return 1 00:03:27.612 18:01:46 -- setup/driver.sh@38 -- # uio 00:03:27.612 18:01:46 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:27.612 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:27.612 18:01:46 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:27.612 Looking for driver=uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:27.612 18:01:46 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:27.612 18:01:46 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:27.612 18:01:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:27.612 18:01:46 -- setup/driver.sh@45 -- # setup output config 00:03:27.612 18:01:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.612 18:01:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:28.550 18:01:46 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:28.550 18:01:46 -- setup/driver.sh@58 -- # continue 00:03:28.550 18:01:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.550 18:01:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:28.550 18:01:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:03:28.550 18:01:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.550 18:01:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:28.550 18:01:46 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:28.550 18:01:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.550 18:01:47 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:28.550 18:01:47 -- setup/driver.sh@65 -- # setup reset 00:03:28.550 18:01:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.550 18:01:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.118 ************************************ 00:03:29.118 END TEST guess_driver 00:03:29.118 ************************************ 00:03:29.118 00:03:29.118 real 0m1.407s 00:03:29.118 user 0m0.542s 00:03:29.118 sys 0m0.856s 00:03:29.118 18:01:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.118 18:01:47 -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 ************************************ 00:03:29.118 END TEST driver 00:03:29.118 ************************************ 00:03:29.118 00:03:29.118 real 0m2.183s 00:03:29.118 user 0m0.881s 00:03:29.118 sys 0m1.362s 00:03:29.118 18:01:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:29.118 18:01:47 -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 18:01:47 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:29.118 18:01:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.118 18:01:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.118 18:01:47 -- common/autotest_common.sh@10 -- # set +x 00:03:29.118 ************************************ 00:03:29.118 START TEST devices 00:03:29.118 ************************************ 00:03:29.118 18:01:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:29.378 * Looking for test storage... 00:03:29.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.378 18:01:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:29.378 18:01:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:29.378 18:01:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:29.378 18:01:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:29.378 18:01:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:29.378 18:01:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:29.378 18:01:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:29.378 18:01:47 -- scripts/common.sh@335 -- # IFS=.-: 00:03:29.378 18:01:47 -- scripts/common.sh@335 -- # read -ra ver1 00:03:29.378 18:01:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.378 18:01:47 -- scripts/common.sh@336 -- # read -ra ver2 00:03:29.378 18:01:47 -- scripts/common.sh@337 -- # local 'op=<' 00:03:29.378 18:01:47 -- scripts/common.sh@339 -- # ver1_l=2 00:03:29.378 18:01:47 -- scripts/common.sh@340 -- # ver2_l=1 00:03:29.378 18:01:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:29.378 18:01:47 -- scripts/common.sh@343 -- # case "$op" in 00:03:29.378 18:01:47 -- scripts/common.sh@344 -- # : 1 00:03:29.378 18:01:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:29.378 18:01:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.378 18:01:47 -- scripts/common.sh@364 -- # decimal 1 00:03:29.378 18:01:47 -- scripts/common.sh@352 -- # local d=1 00:03:29.378 18:01:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.378 18:01:47 -- scripts/common.sh@354 -- # echo 1 00:03:29.378 18:01:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:29.378 18:01:47 -- scripts/common.sh@365 -- # decimal 2 00:03:29.378 18:01:47 -- scripts/common.sh@352 -- # local d=2 00:03:29.378 18:01:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.378 18:01:47 -- scripts/common.sh@354 -- # echo 2 00:03:29.378 18:01:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:29.378 18:01:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:29.378 18:01:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:29.378 18:01:47 -- scripts/common.sh@367 -- # return 0 00:03:29.378 18:01:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.378 18:01:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.378 --rc genhtml_branch_coverage=1 00:03:29.378 --rc genhtml_function_coverage=1 00:03:29.378 --rc genhtml_legend=1 00:03:29.378 --rc geninfo_all_blocks=1 00:03:29.378 --rc geninfo_unexecuted_blocks=1 00:03:29.378 00:03:29.378 ' 00:03:29.378 18:01:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.378 --rc genhtml_branch_coverage=1 00:03:29.378 --rc genhtml_function_coverage=1 00:03:29.378 --rc genhtml_legend=1 00:03:29.378 --rc geninfo_all_blocks=1 00:03:29.378 --rc geninfo_unexecuted_blocks=1 00:03:29.378 00:03:29.378 ' 00:03:29.378 18:01:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.378 --rc genhtml_branch_coverage=1 00:03:29.378 --rc genhtml_function_coverage=1 00:03:29.378 --rc genhtml_legend=1 00:03:29.378 --rc geninfo_all_blocks=1 00:03:29.378 --rc geninfo_unexecuted_blocks=1 00:03:29.378 00:03:29.378 ' 00:03:29.378 18:01:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:29.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.378 --rc genhtml_branch_coverage=1 00:03:29.378 --rc genhtml_function_coverage=1 00:03:29.378 --rc genhtml_legend=1 00:03:29.378 --rc geninfo_all_blocks=1 00:03:29.378 --rc geninfo_unexecuted_blocks=1 00:03:29.378 00:03:29.378 ' 00:03:29.378 18:01:47 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:29.378 18:01:47 -- setup/devices.sh@192 -- # setup reset 00:03:29.378 18:01:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.378 18:01:47 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.316 18:01:48 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:30.316 18:01:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:30.316 18:01:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:30.316 18:01:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:30.316 18:01:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:30.316 18:01:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:30.316 18:01:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:30.316 18:01:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:30.316 18:01:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:30.316 18:01:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:30.316 18:01:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:30.316 18:01:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:30.316 18:01:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:30.316 18:01:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:30.316 18:01:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:30.316 18:01:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:30.316 18:01:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:30.316 18:01:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:30.316 18:01:48 -- setup/devices.sh@196 -- # blocks=() 00:03:30.316 18:01:48 -- setup/devices.sh@196 -- # declare -a blocks 00:03:30.316 18:01:48 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:30.316 18:01:48 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:30.316 18:01:48 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:30.316 18:01:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.316 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:30.316 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:30.316 18:01:48 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:30.316 18:01:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:30.316 18:01:48 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:30.316 18:01:48 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:30.316 18:01:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:30.316 No valid GPT data, bailing 00:03:30.316 18:01:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:30.316 18:01:48 -- scripts/common.sh@393 -- # pt= 00:03:30.316 18:01:48 -- scripts/common.sh@394 -- # return 1 00:03:30.316 18:01:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:30.316 18:01:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:30.316 18:01:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:30.316 18:01:48 -- setup/common.sh@80 -- # echo 5368709120 00:03:30.316 18:01:48 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:30.316 18:01:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.316 18:01:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:30.317 18:01:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:30.317 18:01:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:30.317 18:01:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
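Before any mount test runs, get_zoned_devs (traced above) walks /sys/block/nvme* and records every namespace whose queue/zoned attribute is anything other than none, so zoned devices are excluded from the candidate list. A small standalone sketch of that filter; the array name and output line are illustrative:

#!/usr/bin/env bash
# Sketch: list zoned NVMe block devices by reading the sysfs queue/zoned attribute,
# the same signal the is_block_zoned checks above rely on.
shopt -s nullglob
zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue          # attribute may be absent on old kernels
    [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs+=("${nvme##*/}")
done
echo "zoned devices: ${zoned_devs[*]:-none}"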
00:03:30.317 18:01:48 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:03:30.317 18:01:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:30.317 No valid GPT data, bailing 00:03:30.317 18:01:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:30.317 18:01:48 -- scripts/common.sh@393 -- # pt= 00:03:30.317 18:01:48 -- scripts/common.sh@394 -- # return 1 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:30.317 18:01:48 -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:30.317 18:01:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:30.317 18:01:48 -- setup/common.sh@80 -- # echo 4294967296 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:30.317 18:01:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.317 18:01:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:30.317 18:01:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:30.317 18:01:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:30.317 18:01:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:03:30.317 18:01:48 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:03:30.317 18:01:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:03:30.317 No valid GPT data, bailing 00:03:30.317 18:01:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:30.317 18:01:48 -- scripts/common.sh@393 -- # pt= 00:03:30.317 18:01:48 -- scripts/common.sh@394 -- # return 1 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:03:30.317 18:01:48 -- setup/common.sh@76 -- # local dev=nvme1n2 00:03:30.317 18:01:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:03:30.317 18:01:48 -- setup/common.sh@80 -- # echo 4294967296 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:30.317 18:01:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.317 18:01:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:30.317 18:01:48 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:03:30.317 18:01:48 -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:30.317 18:01:48 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:03:30.317 18:01:48 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:30.317 18:01:48 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:03:30.317 18:01:48 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:03:30.317 18:01:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:03:30.317 No valid GPT data, bailing 00:03:30.317 18:01:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:30.576 18:01:48 -- scripts/common.sh@393 -- # pt= 00:03:30.576 18:01:48 -- scripts/common.sh@394 -- # return 1 00:03:30.576 18:01:48 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:03:30.576 18:01:48 -- setup/common.sh@76 -- # local dev=nvme1n3 00:03:30.576 18:01:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:03:30.576 18:01:48 -- setup/common.sh@80 -- # echo 4294967296 
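Each remaining namespace is then screened by block_in_use plus a size gate: a device with an existing partition table is skipped, and anything smaller than min_disk_size is ignored. A rough equivalent using blkid only (the trace also consults scripts/spdk-gpt.py, which this sketch leaves out); run as root so blkid can probe the devices:

#!/usr/bin/env bash
# Sketch: keep only empty, sufficiently large NVMe namespaces, echoing the
# PTTYPE probe and the ">= min_disk_size" comparison from the trace above.
shopt -s extglob nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))           # 3221225472 bytes, as in the trace

has_partition_table() {
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
}

usable=()
for sysblock in /sys/block/nvme!(*c*); do           # skip controller-level nodes like nvme0c0n1
    name=${sysblock##*/}
    has_partition_table "$name" && continue
    bytes=$(( $(<"$sysblock/size") * 512 ))          # size is reported in 512-byte sectors
    (( bytes >= min_disk_size )) && usable+=("$name")
done
echo "usable test disks: ${usable[*]:-none}"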
00:03:30.576 18:01:48 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:30.576 18:01:48 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:30.576 18:01:48 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:03:30.576 18:01:48 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:30.576 18:01:48 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:30.576 18:01:48 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:30.576 18:01:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.576 18:01:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.576 18:01:48 -- common/autotest_common.sh@10 -- # set +x 00:03:30.576 ************************************ 00:03:30.576 START TEST nvme_mount 00:03:30.576 ************************************ 00:03:30.576 18:01:48 -- common/autotest_common.sh@1114 -- # nvme_mount 00:03:30.576 18:01:48 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:30.576 18:01:48 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:30.576 18:01:48 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:30.576 18:01:48 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:30.576 18:01:48 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:30.576 18:01:48 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:30.576 18:01:48 -- setup/common.sh@40 -- # local part_no=1 00:03:30.576 18:01:48 -- setup/common.sh@41 -- # local size=1073741824 00:03:30.576 18:01:48 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:30.576 18:01:48 -- setup/common.sh@44 -- # parts=() 00:03:30.576 18:01:48 -- setup/common.sh@44 -- # local parts 00:03:30.576 18:01:48 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:30.576 18:01:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.576 18:01:48 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:30.576 18:01:48 -- setup/common.sh@46 -- # (( part++ )) 00:03:30.576 18:01:48 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:30.576 18:01:48 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:30.576 18:01:48 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:30.576 18:01:48 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:31.512 Creating new GPT entries in memory. 00:03:31.512 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:31.512 other utilities. 00:03:31.512 18:01:49 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:31.512 18:01:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:31.512 18:01:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:31.512 18:01:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:31.512 18:01:49 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:32.450 Creating new GPT entries in memory. 00:03:32.450 The operation has completed successfully. 
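At this point the nvme_mount test has prepared its disk with the partition_drive flow: wipe all partition structures with sgdisk, create one small test partition (sectors 2048..264191) while a udev-sync helper waits for the new node, then put ext4 on it. A condensed stand-in for that sequence; DISK is an assumption and partprobe replaces the repository's sync_dev_uevents.sh helper, so point this only at a scratch device:

#!/usr/bin/env bash
# Sketch: zap a disk and create the single test partition seen in the trace
# (sgdisk --zap-all, then --new=1:2048:264191), then format it.
set -euo pipefail
DISK=/dev/nvme0n1                                   # assumption: a disposable test disk

size=$((1073741824 / 4096))                         # the trace's (( size /= 4096 )) arithmetic
part_start=2048
part_end=$((part_start + size - 1))                 # 264191

sgdisk "$DISK" --zap-all                            # destroy any existing GPT/MBR
flock "$DISK" sgdisk "$DISK" --new=1:"$part_start":"$part_end"
partprobe "$DISK" || true                           # stand-in for sync_dev_uevents.sh
mkfs.ext4 -qF "${DISK}p1"                           # same mkfs invocation as the log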
00:03:32.450 18:01:50 -- setup/common.sh@57 -- # (( part++ )) 00:03:32.450 18:01:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.450 18:01:50 -- setup/common.sh@62 -- # wait 52109 00:03:32.450 18:01:51 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.450 18:01:51 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:32.450 18:01:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.450 18:01:51 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:32.450 18:01:51 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:32.450 18:01:51 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.709 18:01:51 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.709 18:01:51 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:32.709 18:01:51 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:32.709 18:01:51 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:32.709 18:01:51 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:32.709 18:01:51 -- setup/devices.sh@53 -- # local found=0 00:03:32.709 18:01:51 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.709 18:01:51 -- setup/devices.sh@56 -- # : 00:03:32.709 18:01:51 -- setup/devices.sh@59 -- # local pci status 00:03:32.709 18:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.709 18:01:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:32.709 18:01:51 -- setup/devices.sh@47 -- # setup output config 00:03:32.709 18:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.709 18:01:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.709 18:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.709 18:01:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:32.709 18:01:51 -- setup/devices.sh@63 -- # found=1 00:03:32.709 18:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.709 18:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.709 18:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.969 18:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:32.969 18:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.228 18:01:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.228 18:01:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.228 18:01:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.228 18:01:51 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:33.228 18:01:51 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.228 18:01:51 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.228 18:01:51 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.228 18:01:51 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.228 18:01:51 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.228 18:01:51 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.228 18:01:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.228 18:01:51 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.228 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.228 18:01:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.228 18:01:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.487 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:33.487 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:33.487 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.487 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.487 18:01:51 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:33.487 18:01:51 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:33.487 18:01:51 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.487 18:01:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:33.487 18:01:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:33.487 18:01:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.487 18:01:52 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.487 18:01:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:33.487 18:01:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:33.487 18:01:52 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.487 18:01:52 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.487 18:01:52 -- setup/devices.sh@53 -- # local found=0 00:03:33.487 18:01:52 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.487 18:01:52 -- setup/devices.sh@56 -- # : 00:03:33.487 18:01:52 -- setup/devices.sh@59 -- # local pci status 00:03:33.487 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.487 18:01:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:33.487 18:01:52 -- setup/devices.sh@47 -- # setup output config 00:03:33.487 18:01:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.487 18:01:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:33.747 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.747 18:01:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:33.747 18:01:52 -- setup/devices.sh@63 -- # found=1 00:03:33.747 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.747 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.747 
18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.006 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.006 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.006 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.006 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.265 18:01:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.265 18:01:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:34.265 18:01:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.265 18:01:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.265 18:01:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:34.265 18:01:52 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.265 18:01:52 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:34.265 18:01:52 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:34.265 18:01:52 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:34.265 18:01:52 -- setup/devices.sh@50 -- # local mount_point= 00:03:34.265 18:01:52 -- setup/devices.sh@51 -- # local test_file= 00:03:34.265 18:01:52 -- setup/devices.sh@53 -- # local found=0 00:03:34.265 18:01:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:34.265 18:01:52 -- setup/devices.sh@59 -- # local pci status 00:03:34.265 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.265 18:01:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:34.265 18:01:52 -- setup/devices.sh@47 -- # setup output config 00:03:34.265 18:01:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.265 18:01:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:34.524 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.524 18:01:52 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:34.524 18:01:52 -- setup/devices.sh@63 -- # found=1 00:03:34.524 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.524 18:01:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.524 18:01:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.783 18:01:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.783 18:01:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.783 18:01:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.783 18:01:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.783 18:01:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.783 18:01:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:34.783 18:01:53 -- setup/devices.sh@68 -- # return 0 00:03:34.783 18:01:53 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:34.783 18:01:53 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.783 18:01:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.783 18:01:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.783 18:01:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.783 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:03:34.783 00:03:34.783 real 0m4.452s 00:03:34.783 user 0m0.965s 00:03:34.783 sys 0m1.156s 00:03:34.783 18:01:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:34.783 ************************************ 00:03:34.783 END TEST nvme_mount 00:03:34.783 ************************************ 00:03:34.783 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:35.042 18:01:53 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:35.042 18:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:35.042 18:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:35.042 18:01:53 -- common/autotest_common.sh@10 -- # set +x 00:03:35.042 ************************************ 00:03:35.042 START TEST dm_mount 00:03:35.042 ************************************ 00:03:35.042 18:01:53 -- common/autotest_common.sh@1114 -- # dm_mount 00:03:35.042 18:01:53 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:35.042 18:01:53 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:35.042 18:01:53 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:35.042 18:01:53 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:35.043 18:01:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:35.043 18:01:53 -- setup/common.sh@40 -- # local part_no=2 00:03:35.043 18:01:53 -- setup/common.sh@41 -- # local size=1073741824 00:03:35.043 18:01:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:35.043 18:01:53 -- setup/common.sh@44 -- # parts=() 00:03:35.043 18:01:53 -- setup/common.sh@44 -- # local parts 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:35.043 18:01:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part++ )) 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:35.043 18:01:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part++ )) 00:03:35.043 18:01:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:35.043 18:01:53 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:35.043 18:01:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:35.043 18:01:53 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:35.980 Creating new GPT entries in memory. 00:03:35.980 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.980 other utilities. 00:03:35.980 18:01:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.980 18:01:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.981 18:01:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.981 18:01:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.981 18:01:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:36.916 Creating new GPT entries in memory. 00:03:36.916 The operation has completed successfully. 00:03:36.916 18:01:55 -- setup/common.sh@57 -- # (( part++ )) 00:03:36.916 18:01:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.916 18:01:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:36.916 18:01:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:36.916 18:01:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:38.294 The operation has completed successfully. 00:03:38.294 18:01:56 -- setup/common.sh@57 -- # (( part++ )) 00:03:38.294 18:01:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.294 18:01:56 -- setup/common.sh@62 -- # wait 52569 00:03:38.294 18:01:56 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:38.294 18:01:56 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.294 18:01:56 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:38.294 18:01:56 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:38.294 18:01:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:38.294 18:01:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:38.294 18:01:56 -- setup/devices.sh@161 -- # break 00:03:38.294 18:01:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:38.294 18:01:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:38.294 18:01:56 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:38.294 18:01:56 -- setup/devices.sh@166 -- # dm=dm-0 00:03:38.294 18:01:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:38.294 18:01:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:38.294 18:01:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.294 18:01:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:38.294 18:01:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.294 18:01:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:38.294 18:01:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:38.294 18:01:56 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.294 18:01:56 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:38.294 18:01:56 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:38.294 18:01:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:38.294 18:01:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.294 18:01:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:38.294 18:01:56 -- setup/devices.sh@53 -- # local found=0 00:03:38.294 18:01:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.294 18:01:56 -- setup/devices.sh@56 -- # : 00:03:38.294 18:01:56 -- setup/devices.sh@59 -- # local pci status 00:03:38.294 18:01:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:38.294 18:01:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.294 18:01:56 -- setup/devices.sh@47 -- # setup output config 00:03:38.294 18:01:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.294 18:01:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.294 18:01:56 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.294 18:01:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:38.294 18:01:56 -- setup/devices.sh@63 -- # found=1 00:03:38.294 18:01:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.294 18:01:56 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.294 18:01:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.554 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.554 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.814 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.814 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.814 18:01:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.814 18:01:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:38.814 18:01:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.814 18:01:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.814 18:01:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:38.814 18:01:57 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.814 18:01:57 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:38.814 18:01:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:38.814 18:01:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:38.814 18:01:57 -- setup/devices.sh@50 -- # local mount_point= 00:03:38.814 18:01:57 -- setup/devices.sh@51 -- # local test_file= 00:03:38.814 18:01:57 -- setup/devices.sh@53 -- # local found=0 00:03:38.814 18:01:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:38.814 18:01:57 -- setup/devices.sh@59 -- # local pci status 00:03:38.814 18:01:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:38.814 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.814 18:01:57 -- setup/devices.sh@47 -- # setup output config 00:03:38.814 18:01:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.814 18:01:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.073 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:39.073 18:01:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:39.073 18:01:57 -- setup/devices.sh@63 -- # found=1 00:03:39.073 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.073 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:39.073 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.332 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:39.332 18:01:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.332 18:01:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:39.332 18:01:57 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.332 18:01:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.332 18:01:57 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:39.332 18:01:57 -- setup/devices.sh@68 -- # return 0 00:03:39.332 18:01:57 -- setup/devices.sh@187 -- # cleanup_dm 00:03:39.332 18:01:57 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:39.332 18:01:57 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:39.332 18:01:57 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:39.591 18:01:57 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.591 18:01:57 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:39.591 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.591 18:01:57 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:39.591 18:01:57 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:39.591 00:03:39.591 real 0m4.531s 00:03:39.591 user 0m0.704s 00:03:39.591 sys 0m0.754s 00:03:39.591 ************************************ 00:03:39.591 END TEST dm_mount 00:03:39.591 ************************************ 00:03:39.591 18:01:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:39.591 18:01:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.591 18:01:58 -- setup/devices.sh@1 -- # cleanup 00:03:39.591 18:01:58 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:39.591 18:01:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.591 18:01:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.591 18:01:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:39.591 18:01:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.591 18:01:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.851 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:39.851 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:39.851 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:39.851 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:39.851 18:01:58 -- setup/devices.sh@12 -- # cleanup_dm 00:03:39.851 18:01:58 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:39.851 18:01:58 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:39.851 18:01:58 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.851 18:01:58 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:39.851 18:01:58 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.851 18:01:58 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:39.851 00:03:39.851 real 0m10.622s 00:03:39.851 user 0m2.432s 00:03:39.851 sys 0m2.496s 00:03:39.851 18:01:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:39.851 18:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:39.851 ************************************ 00:03:39.851 END TEST devices 00:03:39.851 ************************************ 00:03:39.851 00:03:39.851 real 0m22.108s 00:03:39.851 user 0m7.651s 00:03:39.851 sys 0m8.885s 00:03:39.851 18:01:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:39.851 18:01:58 -- common/autotest_common.sh@10 -- # set +x 00:03:39.851 ************************************ 00:03:39.851 END TEST setup.sh 00:03:39.851 ************************************ 00:03:39.851 18:01:58 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.110 Hugepages 00:03:40.110 node hugesize free / total 00:03:40.110 node0 1048576kB 0 / 0 00:03:40.110 node0 2048kB 2048 / 2048 00:03:40.110 00:03:40.110 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.110 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:40.110 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:40.370 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:40.370 18:01:58 -- spdk/autotest.sh@128 -- # uname -s 00:03:40.370 18:01:58 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:03:40.370 18:01:58 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:03:40.370 18:01:58 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:40.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.938 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:40.938 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.197 18:01:59 -- common/autotest_common.sh@1527 -- # sleep 1 00:03:42.134 18:02:00 -- common/autotest_common.sh@1528 -- # bdfs=() 00:03:42.134 18:02:00 -- common/autotest_common.sh@1528 -- # local bdfs 00:03:42.134 18:02:00 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:03:42.134 18:02:00 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:03:42.134 18:02:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:42.134 18:02:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:42.134 18:02:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.134 18:02:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:42.134 18:02:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:42.134 18:02:00 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:42.134 18:02:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:42.134 18:02:00 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:42.394 Waiting for block devices as requested 00:03:42.653 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:42.653 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:03:42.653 18:02:01 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:42.653 18:02:01 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:42.653 18:02:01 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:03:42.653 18:02:01 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:42.653 18:02:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:42.653 18:02:01 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:42.653 18:02:01 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:42.653 18:02:01 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:42.653 18:02:01 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:42.653 18:02:01 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:42.653 18:02:01 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:42.653 18:02:01 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:42.653 18:02:01 -- common/autotest_common.sh@1552 -- # continue 00:03:42.653 18:02:01 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:03:42.653 18:02:01 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:03:42.653 18:02:01 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:03:42.653 18:02:01 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:03:42.653 18:02:01 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:03:42.653 18:02:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:42.654 18:02:01 -- common/autotest_common.sh@1540 -- # grep oacs 00:03:42.654 18:02:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:42.654 18:02:01 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:03:42.654 18:02:01 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:03:42.654 18:02:01 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:03:42.654 18:02:01 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:03:42.654 18:02:01 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:03:42.654 18:02:01 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:03:42.654 18:02:01 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:03:42.654 18:02:01 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:03:42.654 18:02:01 -- common/autotest_common.sh@1552 -- # continue 00:03:42.654 18:02:01 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:03:42.654 18:02:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:42.654 18:02:01 -- common/autotest_common.sh@10 -- # set +x 00:03:42.912 18:02:01 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:03:42.912 18:02:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.912 18:02:01 -- common/autotest_common.sh@10 -- # set +x 00:03:42.912 18:02:01 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.481 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.481 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:03:43.741 18:02:02 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:03:43.741 18:02:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:43.741 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:43.741 18:02:02 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:03:43.741 18:02:02 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:03:43.741 18:02:02 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:03:43.741 18:02:02 -- common/autotest_common.sh@1572 -- # bdfs=() 00:03:43.741 18:02:02 -- common/autotest_common.sh@1572 -- # local bdfs 00:03:43.741 18:02:02 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:03:43.741 18:02:02 -- common/autotest_common.sh@1508 -- # bdfs=() 00:03:43.741 18:02:02 -- common/autotest_common.sh@1508 -- # local bdfs 00:03:43.741 18:02:02 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.741 18:02:02 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:43.741 18:02:02 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:03:43.741 18:02:02 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:03:43.741 18:02:02 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:03:43.741 18:02:02 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:43.741 18:02:02 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:43.741 18:02:02 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:43.741 18:02:02 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:43.741 18:02:02 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:03:43.741 18:02:02 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:03:43.741 18:02:02 -- common/autotest_common.sh@1575 -- # device=0x0010 00:03:43.741 18:02:02 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:43.741 18:02:02 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:03:43.741 18:02:02 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:03:43.741 18:02:02 -- common/autotest_common.sh@1588 -- # return 0 00:03:43.741 18:02:02 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:03:43.741 18:02:02 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:03:43.741 18:02:02 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:43.741 18:02:02 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:03:43.741 18:02:02 -- spdk/autotest.sh@160 -- # timing_enter lib 00:03:43.741 18:02:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.741 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:43.741 18:02:02 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:43.741 18:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.741 18:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.741 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:43.741 ************************************ 00:03:43.741 START TEST env 00:03:43.741 ************************************ 00:03:43.741 18:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:43.741 * Looking for test storage... 
00:03:43.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:43.741 18:02:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:43.741 18:02:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:43.741 18:02:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:44.001 18:02:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:44.001 18:02:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:44.001 18:02:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:44.001 18:02:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:44.001 18:02:02 -- scripts/common.sh@335 -- # IFS=.-: 00:03:44.001 18:02:02 -- scripts/common.sh@335 -- # read -ra ver1 00:03:44.001 18:02:02 -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.001 18:02:02 -- scripts/common.sh@336 -- # read -ra ver2 00:03:44.001 18:02:02 -- scripts/common.sh@337 -- # local 'op=<' 00:03:44.001 18:02:02 -- scripts/common.sh@339 -- # ver1_l=2 00:03:44.001 18:02:02 -- scripts/common.sh@340 -- # ver2_l=1 00:03:44.001 18:02:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:44.001 18:02:02 -- scripts/common.sh@343 -- # case "$op" in 00:03:44.001 18:02:02 -- scripts/common.sh@344 -- # : 1 00:03:44.001 18:02:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:44.001 18:02:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.001 18:02:02 -- scripts/common.sh@364 -- # decimal 1 00:03:44.001 18:02:02 -- scripts/common.sh@352 -- # local d=1 00:03:44.001 18:02:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.001 18:02:02 -- scripts/common.sh@354 -- # echo 1 00:03:44.001 18:02:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:44.001 18:02:02 -- scripts/common.sh@365 -- # decimal 2 00:03:44.001 18:02:02 -- scripts/common.sh@352 -- # local d=2 00:03:44.001 18:02:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.001 18:02:02 -- scripts/common.sh@354 -- # echo 2 00:03:44.001 18:02:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:44.001 18:02:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:44.001 18:02:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:44.001 18:02:02 -- scripts/common.sh@367 -- # return 0 00:03:44.001 18:02:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.001 18:02:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:44.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.001 --rc genhtml_branch_coverage=1 00:03:44.001 --rc genhtml_function_coverage=1 00:03:44.001 --rc genhtml_legend=1 00:03:44.001 --rc geninfo_all_blocks=1 00:03:44.001 --rc geninfo_unexecuted_blocks=1 00:03:44.001 00:03:44.001 ' 00:03:44.001 18:02:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:44.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.001 --rc genhtml_branch_coverage=1 00:03:44.001 --rc genhtml_function_coverage=1 00:03:44.001 --rc genhtml_legend=1 00:03:44.001 --rc geninfo_all_blocks=1 00:03:44.001 --rc geninfo_unexecuted_blocks=1 00:03:44.001 00:03:44.001 ' 00:03:44.001 18:02:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:44.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.001 --rc genhtml_branch_coverage=1 00:03:44.001 --rc genhtml_function_coverage=1 00:03:44.001 --rc genhtml_legend=1 00:03:44.001 --rc geninfo_all_blocks=1 00:03:44.001 --rc geninfo_unexecuted_blocks=1 00:03:44.001 00:03:44.001 ' 00:03:44.001 18:02:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:44.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.001 --rc genhtml_branch_coverage=1 00:03:44.001 --rc genhtml_function_coverage=1 00:03:44.001 --rc genhtml_legend=1 00:03:44.001 --rc geninfo_all_blocks=1 00:03:44.001 --rc geninfo_unexecuted_blocks=1 00:03:44.001 00:03:44.001 ' 00:03:44.001 18:02:02 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:44.001 18:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.001 18:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.001 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.001 ************************************ 00:03:44.001 START TEST env_memory 00:03:44.001 ************************************ 00:03:44.001 18:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:44.001 00:03:44.001 00:03:44.001 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.001 http://cunit.sourceforge.net/ 00:03:44.001 00:03:44.001 00:03:44.001 Suite: memory 00:03:44.001 Test: alloc and free memory map ...[2024-11-18 18:02:02.495667] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.001 passed 00:03:44.001 Test: mem map translation ...[2024-11-18 18:02:02.526696] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.001 [2024-11-18 18:02:02.526730] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.002 [2024-11-18 18:02:02.526785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.002 [2024-11-18 18:02:02.526796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.002 passed 00:03:44.002 Test: mem map registration ...[2024-11-18 18:02:02.590630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:44.002 [2024-11-18 18:02:02.590656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:44.262 passed 00:03:44.262 Test: mem map adjacent registrations ...passed 00:03:44.262 00:03:44.262 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.262 suites 1 1 n/a 0 0 00:03:44.262 tests 4 4 4 0 0 00:03:44.262 asserts 152 152 152 0 n/a 00:03:44.262 00:03:44.262 Elapsed time = 0.214 seconds 00:03:44.262 00:03:44.262 real 0m0.233s 00:03:44.262 user 0m0.215s 00:03:44.262 sys 0m0.013s 00:03:44.262 18:02:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:44.262 ************************************ 00:03:44.262 END TEST env_memory 00:03:44.262 18:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.262 ************************************ 00:03:44.262 18:02:02 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.262 18:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.262 18:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.262 18:02:02 -- 
common/autotest_common.sh@10 -- # set +x 00:03:44.262 ************************************ 00:03:44.262 START TEST env_vtophys 00:03:44.262 ************************************ 00:03:44.262 18:02:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.262 EAL: lib.eal log level changed from notice to debug 00:03:44.262 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 1 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 2 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 3 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 4 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 5 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 6 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 7 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 8 as core 0 on socket 0 00:03:44.262 EAL: Detected lcore 9 as core 0 on socket 0 00:03:44.262 EAL: Maximum logical cores by configuration: 128 00:03:44.262 EAL: Detected CPU lcores: 10 00:03:44.262 EAL: Detected NUMA nodes: 1 00:03:44.262 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:44.262 EAL: Detected shared linkage of DPDK 00:03:44.262 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.262 EAL: Selected IOVA mode 'PA' 00:03:44.262 EAL: Probing VFIO support... 00:03:44.262 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:44.262 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:44.262 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.262 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:44.262 EAL: Setting up physically contiguous memory... 00:03:44.262 EAL: Setting maximum number of open files to 524288 00:03:44.262 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:44.262 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:44.262 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.262 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:44.262 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.262 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.262 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:44.262 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:44.262 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.262 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:44.262 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.262 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.262 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:44.262 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:44.262 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.262 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:44.262 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.262 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.262 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:44.262 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:44.262 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.262 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:44.262 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.262 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.262 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:44.262 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
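The vtophys run above falls back to IOVA mode 'PA' because the vfio module is not present, and its memseg lists are carved out of the 2048 kB hugepage pool that setup.sh status reported earlier. A quick way to inspect those two preconditions on the test VM (plain Linux commands, not part of the test scripts):

    # hugepage pool that backs the EAL memseg lists reserved above
    grep -i huge /proc/meminfo
    # the vfio module EAL probes for; its absence is why IOVA mode 'PA' is selected
    lsmod | grep vfio || echo "vfio not loaded"
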
00:03:44.262 EAL: Hugepages will be freed exactly as allocated. 00:03:44.262 EAL: No shared files mode enabled, IPC is disabled 00:03:44.262 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: TSC frequency is ~2200000 KHz 00:03:44.522 EAL: Main lcore 0 is ready (tid=7f45d35f5a00;cpuset=[0]) 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 0 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 2MB 00:03:44.522 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:44.522 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:44.522 EAL: Mem event callback 'spdk:(nil)' registered 00:03:44.522 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:44.522 00:03:44.522 00:03:44.522 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.522 http://cunit.sourceforge.net/ 00:03:44.522 00:03:44.522 00:03:44.522 Suite: components_suite 00:03:44.522 Test: vtophys_malloc_test ...passed 00:03:44.522 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.522 EAL: Trying to obtain current memory policy. 
00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.522 EAL: Restoring previous memory policy: 4 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.522 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.522 EAL: request: mp_malloc_sync 00:03:44.522 EAL: No shared files mode enabled, IPC is disabled 00:03:44.522 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.522 EAL: Trying to obtain current memory policy. 00:03:44.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.523 EAL: Restoring previous memory policy: 4 00:03:44.523 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.523 EAL: request: mp_malloc_sync 00:03:44.523 EAL: No shared files mode enabled, IPC is disabled 00:03:44.523 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.523 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.523 EAL: request: mp_malloc_sync 00:03:44.523 EAL: No shared files mode enabled, IPC is disabled 00:03:44.523 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.523 EAL: Trying to obtain current memory policy. 
00:03:44.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.785 EAL: Restoring previous memory policy: 4 00:03:44.785 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.785 EAL: request: mp_malloc_sync 00:03:44.785 EAL: No shared files mode enabled, IPC is disabled 00:03:44.785 EAL: Heap on socket 0 was expanded by 514MB 00:03:44.785 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.785 EAL: request: mp_malloc_sync 00:03:44.785 EAL: No shared files mode enabled, IPC is disabled 00:03:44.785 EAL: Heap on socket 0 was shrunk by 514MB 00:03:44.785 EAL: Trying to obtain current memory policy. 00:03:44.785 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.785 EAL: Restoring previous memory policy: 4 00:03:44.785 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.785 EAL: request: mp_malloc_sync 00:03:44.785 EAL: No shared files mode enabled, IPC is disabled 00:03:44.785 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.043 passed 00:03:45.043 00:03:45.043 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.043 suites 1 1 n/a 0 0 00:03:45.043 tests 2 2 2 0 0 00:03:45.043 asserts 5337 5337 5337 0 n/a 00:03:45.043 00:03:45.043 Elapsed time = 0.674 seconds 00:03:45.043 EAL: request: mp_malloc_sync 00:03:45.043 EAL: No shared files mode enabled, IPC is disabled 00:03:45.043 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.043 EAL: request: mp_malloc_sync 00:03:45.043 EAL: No shared files mode enabled, IPC is disabled 00:03:45.043 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.043 EAL: No shared files mode enabled, IPC is disabled 00:03:45.043 EAL: No shared files mode enabled, IPC is disabled 00:03:45.043 EAL: No shared files mode enabled, IPC is disabled 00:03:45.044 00:03:45.044 real 0m0.868s 00:03:45.044 user 0m0.433s 00:03:45.044 sys 0m0.305s 00:03:45.044 18:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.044 ************************************ 00:03:45.044 END TEST env_vtophys 00:03:45.044 ************************************ 00:03:45.044 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.044 18:02:03 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.044 18:02:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.044 18:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.044 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.044 ************************************ 00:03:45.044 START TEST env_pci 00:03:45.044 ************************************ 00:03:45.044 18:02:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.303 00:03:45.303 00:03:45.303 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.303 http://cunit.sourceforge.net/ 00:03:45.303 00:03:45.303 00:03:45.303 Suite: pci 00:03:45.303 Test: pci_hook ...[2024-11-18 18:02:03.659880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53702 has claimed it 00:03:45.303 passed 00:03:45.303 00:03:45.303 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.303 suites 1 1 n/a 0 0 00:03:45.303 tests 1 1 1 0 0 00:03:45.303 asserts 25 25 25 0 n/a 00:03:45.303 00:03:45.303 Elapsed time = 0.002 seconds 00:03:45.303 EAL: Cannot find device (10000:00:01.0) 00:03:45.303 EAL: Failed to attach device 
on primary process 00:03:45.303 00:03:45.303 real 0m0.021s 00:03:45.303 user 0m0.009s 00:03:45.303 sys 0m0.012s 00:03:45.303 18:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.303 ************************************ 00:03:45.303 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.303 END TEST env_pci 00:03:45.303 ************************************ 00:03:45.303 18:02:03 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.303 18:02:03 -- env/env.sh@15 -- # uname 00:03:45.303 18:02:03 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.303 18:02:03 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.303 18:02:03 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.303 18:02:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:45.303 18:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.303 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.303 ************************************ 00:03:45.303 START TEST env_dpdk_post_init 00:03:45.303 ************************************ 00:03:45.303 18:02:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.303 EAL: Detected CPU lcores: 10 00:03:45.303 EAL: Detected NUMA nodes: 1 00:03:45.303 EAL: Detected shared linkage of DPDK 00:03:45.303 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.303 EAL: Selected IOVA mode 'PA' 00:03:45.303 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.303 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:45.303 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:03:45.303 Starting DPDK initialization... 00:03:45.303 Starting SPDK post initialization... 00:03:45.303 SPDK NVMe probe 00:03:45.303 Attaching to 0000:00:06.0 00:03:45.303 Attaching to 0000:00:07.0 00:03:45.303 Attached to 0000:00:06.0 00:03:45.303 Attached to 0000:00:07.0 00:03:45.303 Cleaning up... 
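The env_dpdk_post_init pass above attaches both emulated NVMe controllers (0000:00:06.0 and 0000:00:07.0) using the core mask and base virtual address that env.sh assembled a few lines earlier. Re-running just that check by hand would look roughly like this (binary path and arguments taken from the log; root privileges assumed for hugepage and PCI access):

    sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
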
00:03:45.303 00:03:45.303 real 0m0.175s 00:03:45.303 user 0m0.038s 00:03:45.303 sys 0m0.037s 00:03:45.303 18:02:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.303 ************************************ 00:03:45.303 END TEST env_dpdk_post_init 00:03:45.303 ************************************ 00:03:45.303 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.563 18:02:03 -- env/env.sh@26 -- # uname 00:03:45.563 18:02:03 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:45.563 18:02:03 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.563 18:02:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.563 18:02:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.563 18:02:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.563 ************************************ 00:03:45.563 START TEST env_mem_callbacks 00:03:45.563 ************************************ 00:03:45.563 18:02:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:45.563 EAL: Detected CPU lcores: 10 00:03:45.563 EAL: Detected NUMA nodes: 1 00:03:45.563 EAL: Detected shared linkage of DPDK 00:03:45.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.563 EAL: Selected IOVA mode 'PA' 00:03:45.563 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.563 00:03:45.563 00:03:45.563 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.563 http://cunit.sourceforge.net/ 00:03:45.563 00:03:45.563 00:03:45.563 Suite: memory 00:03:45.563 Test: test ... 00:03:45.563 register 0x200000200000 2097152 00:03:45.563 malloc 3145728 00:03:45.563 register 0x200000400000 4194304 00:03:45.563 buf 0x200000500000 len 3145728 PASSED 00:03:45.563 malloc 64 00:03:45.563 buf 0x2000004fff40 len 64 PASSED 00:03:45.563 malloc 4194304 00:03:45.563 register 0x200000800000 6291456 00:03:45.563 buf 0x200000a00000 len 4194304 PASSED 00:03:45.563 free 0x200000500000 3145728 00:03:45.563 free 0x2000004fff40 64 00:03:45.563 unregister 0x200000400000 4194304 PASSED 00:03:45.563 free 0x200000a00000 4194304 00:03:45.563 unregister 0x200000800000 6291456 PASSED 00:03:45.563 malloc 8388608 00:03:45.563 register 0x200000400000 10485760 00:03:45.563 buf 0x200000600000 len 8388608 PASSED 00:03:45.563 free 0x200000600000 8388608 00:03:45.563 unregister 0x200000400000 10485760 PASSED 00:03:45.563 passed 00:03:45.563 00:03:45.563 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.563 suites 1 1 n/a 0 0 00:03:45.563 tests 1 1 1 0 0 00:03:45.563 asserts 15 15 15 0 n/a 00:03:45.563 00:03:45.563 Elapsed time = 0.008 seconds 00:03:45.563 00:03:45.563 real 0m0.140s 00:03:45.563 user 0m0.017s 00:03:45.563 sys 0m0.022s 00:03:45.563 18:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.563 18:02:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.563 ************************************ 00:03:45.563 END TEST env_mem_callbacks 00:03:45.563 ************************************ 00:03:45.563 00:03:45.563 real 0m1.873s 00:03:45.563 user 0m0.919s 00:03:45.563 sys 0m0.611s 00:03:45.563 18:02:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:45.563 18:02:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.563 ************************************ 00:03:45.563 END TEST env 00:03:45.563 ************************************ 00:03:45.822 18:02:04 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
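Every suite in this log is wrapped by the same run_test helper, which prints the START/END banners around each test script and is followed by the elapsed real/user/sys times. A simplified sketch of that pattern (an approximation, not the actual autotest_common.sh implementation):

    run_test() {                 # rough stand-in: banner + timing around a test script
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
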
00:03:45.822 18:02:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.822 18:02:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.822 ************************************ 00:03:45.822 START TEST rpc 00:03:45.822 ************************************ 00:03:45.822 18:02:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.822 * Looking for test storage... 00:03:45.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:45.822 18:02:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:45.822 18:02:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:45.822 18:02:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:45.822 18:02:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:45.822 18:02:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:45.822 18:02:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:45.822 18:02:04 -- scripts/common.sh@335 -- # IFS=.-: 00:03:45.822 18:02:04 -- scripts/common.sh@335 -- # read -ra ver1 00:03:45.822 18:02:04 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.822 18:02:04 -- scripts/common.sh@336 -- # read -ra ver2 00:03:45.822 18:02:04 -- scripts/common.sh@337 -- # local 'op=<' 00:03:45.822 18:02:04 -- scripts/common.sh@339 -- # ver1_l=2 00:03:45.822 18:02:04 -- scripts/common.sh@340 -- # ver2_l=1 00:03:45.822 18:02:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:45.822 18:02:04 -- scripts/common.sh@343 -- # case "$op" in 00:03:45.822 18:02:04 -- scripts/common.sh@344 -- # : 1 00:03:45.822 18:02:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:45.822 18:02:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.822 18:02:04 -- scripts/common.sh@364 -- # decimal 1 00:03:45.822 18:02:04 -- scripts/common.sh@352 -- # local d=1 00:03:45.822 18:02:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.822 18:02:04 -- scripts/common.sh@354 -- # echo 1 00:03:45.822 18:02:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:45.822 18:02:04 -- scripts/common.sh@365 -- # decimal 2 00:03:45.822 18:02:04 -- scripts/common.sh@352 -- # local d=2 00:03:45.822 18:02:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.822 18:02:04 -- scripts/common.sh@354 -- # echo 2 00:03:45.822 18:02:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:45.822 18:02:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:45.822 18:02:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:45.822 18:02:04 -- scripts/common.sh@367 -- # return 0 00:03:45.822 18:02:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.822 --rc genhtml_branch_coverage=1 00:03:45.822 --rc genhtml_function_coverage=1 00:03:45.822 --rc genhtml_legend=1 00:03:45.822 --rc geninfo_all_blocks=1 00:03:45.822 --rc geninfo_unexecuted_blocks=1 00:03:45.822 00:03:45.822 ' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.822 --rc genhtml_branch_coverage=1 00:03:45.822 --rc genhtml_function_coverage=1 00:03:45.822 --rc genhtml_legend=1 00:03:45.822 --rc geninfo_all_blocks=1 00:03:45.822 --rc geninfo_unexecuted_blocks=1 00:03:45.822 00:03:45.822 ' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.822 --rc genhtml_branch_coverage=1 00:03:45.822 --rc genhtml_function_coverage=1 00:03:45.822 --rc genhtml_legend=1 00:03:45.822 --rc geninfo_all_blocks=1 00:03:45.822 --rc geninfo_unexecuted_blocks=1 00:03:45.822 00:03:45.822 ' 00:03:45.822 18:02:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:45.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.822 --rc genhtml_branch_coverage=1 00:03:45.822 --rc genhtml_function_coverage=1 00:03:45.822 --rc genhtml_legend=1 00:03:45.822 --rc geninfo_all_blocks=1 00:03:45.822 --rc geninfo_unexecuted_blocks=1 00:03:45.822 00:03:45.822 ' 00:03:45.822 18:02:04 -- rpc/rpc.sh@65 -- # spdk_pid=53824 00:03:45.822 18:02:04 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:45.822 18:02:04 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.822 18:02:04 -- rpc/rpc.sh@67 -- # waitforlisten 53824 00:03:45.822 18:02:04 -- common/autotest_common.sh@829 -- # '[' -z 53824 ']' 00:03:45.822 18:02:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.822 18:02:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:45.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.822 18:02:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
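The rpc suite launches a dedicated spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the target's RPC socket is up. Outside the harness, the same startup can be approximated as follows (binary and socket path taken from the log; the polling loop is a simplification of waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # crude stand-in for waitforlisten: poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
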
00:03:45.822 18:02:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:45.822 18:02:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.822 [2024-11-18 18:02:04.418063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:45.822 [2024-11-18 18:02:04.418195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53824 ] 00:03:46.082 [2024-11-18 18:02:04.560407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.082 [2024-11-18 18:02:04.628701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:46.082 [2024-11-18 18:02:04.628895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.082 [2024-11-18 18:02:04.628912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53824' to capture a snapshot of events at runtime. 00:03:46.082 [2024-11-18 18:02:04.628923] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53824 for offline analysis/debug. 00:03:46.082 [2024-11-18 18:02:04.628952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.020 18:02:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:47.020 18:02:05 -- common/autotest_common.sh@862 -- # return 0 00:03:47.020 18:02:05 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.020 18:02:05 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.020 18:02:05 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.020 18:02:05 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.020 18:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.020 18:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.020 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.020 ************************************ 00:03:47.020 START TEST rpc_integrity 00:03:47.020 ************************************ 00:03:47.020 18:02:05 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:47.020 18:02:05 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.020 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.020 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.020 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.020 18:02:05 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.020 18:02:05 -- rpc/rpc.sh@13 -- # jq length 00:03:47.020 18:02:05 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.020 18:02:05 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.020 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.020 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.020 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.020 18:02:05 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.020 18:02:05 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.020 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.020 18:02:05 -- 
common/autotest_common.sh@10 -- # set +x 00:03:47.020 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.020 18:02:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.020 { 00:03:47.020 "name": "Malloc0", 00:03:47.020 "aliases": [ 00:03:47.020 "fe8f20a6-a521-4299-87e6-d0be4f4ed356" 00:03:47.020 ], 00:03:47.020 "product_name": "Malloc disk", 00:03:47.020 "block_size": 512, 00:03:47.020 "num_blocks": 16384, 00:03:47.020 "uuid": "fe8f20a6-a521-4299-87e6-d0be4f4ed356", 00:03:47.020 "assigned_rate_limits": { 00:03:47.020 "rw_ios_per_sec": 0, 00:03:47.020 "rw_mbytes_per_sec": 0, 00:03:47.020 "r_mbytes_per_sec": 0, 00:03:47.020 "w_mbytes_per_sec": 0 00:03:47.020 }, 00:03:47.020 "claimed": false, 00:03:47.020 "zoned": false, 00:03:47.020 "supported_io_types": { 00:03:47.020 "read": true, 00:03:47.020 "write": true, 00:03:47.020 "unmap": true, 00:03:47.020 "write_zeroes": true, 00:03:47.020 "flush": true, 00:03:47.020 "reset": true, 00:03:47.020 "compare": false, 00:03:47.020 "compare_and_write": false, 00:03:47.020 "abort": true, 00:03:47.020 "nvme_admin": false, 00:03:47.020 "nvme_io": false 00:03:47.020 }, 00:03:47.020 "memory_domains": [ 00:03:47.020 { 00:03:47.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.020 "dma_device_type": 2 00:03:47.020 } 00:03:47.020 ], 00:03:47.020 "driver_specific": {} 00:03:47.020 } 00:03:47.020 ]' 00:03:47.021 18:02:05 -- rpc/rpc.sh@17 -- # jq length 00:03:47.021 18:02:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.021 18:02:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.021 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.021 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 [2024-11-18 18:02:05.552854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.021 [2024-11-18 18:02:05.552918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.021 [2024-11-18 18:02:05.552969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ad4c0 00:03:47.021 [2024-11-18 18:02:05.552977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.021 [2024-11-18 18:02:05.554496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.021 [2024-11-18 18:02:05.554570] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.021 Passthru0 00:03:47.021 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.021 18:02:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.021 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.021 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.021 18:02:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.021 { 00:03:47.021 "name": "Malloc0", 00:03:47.021 "aliases": [ 00:03:47.021 "fe8f20a6-a521-4299-87e6-d0be4f4ed356" 00:03:47.021 ], 00:03:47.021 "product_name": "Malloc disk", 00:03:47.021 "block_size": 512, 00:03:47.021 "num_blocks": 16384, 00:03:47.021 "uuid": "fe8f20a6-a521-4299-87e6-d0be4f4ed356", 00:03:47.021 "assigned_rate_limits": { 00:03:47.021 "rw_ios_per_sec": 0, 00:03:47.021 "rw_mbytes_per_sec": 0, 00:03:47.021 "r_mbytes_per_sec": 0, 00:03:47.021 "w_mbytes_per_sec": 0 00:03:47.021 }, 00:03:47.021 "claimed": true, 00:03:47.021 "claim_type": "exclusive_write", 00:03:47.021 "zoned": false, 00:03:47.021 "supported_io_types": { 00:03:47.021 "read": true, 
00:03:47.021 "write": true, 00:03:47.021 "unmap": true, 00:03:47.021 "write_zeroes": true, 00:03:47.021 "flush": true, 00:03:47.021 "reset": true, 00:03:47.021 "compare": false, 00:03:47.021 "compare_and_write": false, 00:03:47.021 "abort": true, 00:03:47.021 "nvme_admin": false, 00:03:47.021 "nvme_io": false 00:03:47.021 }, 00:03:47.021 "memory_domains": [ 00:03:47.021 { 00:03:47.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.021 "dma_device_type": 2 00:03:47.021 } 00:03:47.021 ], 00:03:47.021 "driver_specific": {} 00:03:47.021 }, 00:03:47.021 { 00:03:47.021 "name": "Passthru0", 00:03:47.021 "aliases": [ 00:03:47.021 "af7c7650-4268-56f0-b73d-d16504422abf" 00:03:47.021 ], 00:03:47.021 "product_name": "passthru", 00:03:47.021 "block_size": 512, 00:03:47.021 "num_blocks": 16384, 00:03:47.021 "uuid": "af7c7650-4268-56f0-b73d-d16504422abf", 00:03:47.021 "assigned_rate_limits": { 00:03:47.021 "rw_ios_per_sec": 0, 00:03:47.021 "rw_mbytes_per_sec": 0, 00:03:47.021 "r_mbytes_per_sec": 0, 00:03:47.021 "w_mbytes_per_sec": 0 00:03:47.021 }, 00:03:47.021 "claimed": false, 00:03:47.021 "zoned": false, 00:03:47.021 "supported_io_types": { 00:03:47.021 "read": true, 00:03:47.021 "write": true, 00:03:47.021 "unmap": true, 00:03:47.021 "write_zeroes": true, 00:03:47.021 "flush": true, 00:03:47.021 "reset": true, 00:03:47.021 "compare": false, 00:03:47.021 "compare_and_write": false, 00:03:47.021 "abort": true, 00:03:47.021 "nvme_admin": false, 00:03:47.021 "nvme_io": false 00:03:47.021 }, 00:03:47.021 "memory_domains": [ 00:03:47.021 { 00:03:47.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.021 "dma_device_type": 2 00:03:47.021 } 00:03:47.021 ], 00:03:47.021 "driver_specific": { 00:03:47.021 "passthru": { 00:03:47.021 "name": "Passthru0", 00:03:47.021 "base_bdev_name": "Malloc0" 00:03:47.021 } 00:03:47.021 } 00:03:47.021 } 00:03:47.021 ]' 00:03:47.021 18:02:05 -- rpc/rpc.sh@21 -- # jq length 00:03:47.280 18:02:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.280 18:02:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.280 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.280 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.280 18:02:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.280 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.280 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.280 18:02:05 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.280 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.280 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.280 18:02:05 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.280 18:02:05 -- rpc/rpc.sh@26 -- # jq length 00:03:47.280 18:02:05 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.280 00:03:47.280 real 0m0.308s 00:03:47.280 user 0m0.204s 00:03:47.280 sys 0m0.036s 00:03:47.280 18:02:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.280 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 ************************************ 00:03:47.280 END TEST rpc_integrity 00:03:47.280 ************************************ 00:03:47.280 18:02:05 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.280 18:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:03:47.280 18:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.280 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 ************************************ 00:03:47.280 START TEST rpc_plugins 00:03:47.281 ************************************ 00:03:47.281 18:02:05 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:03:47.281 18:02:05 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.281 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.281 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.281 18:02:05 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.281 18:02:05 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.281 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.281 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.281 18:02:05 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.281 { 00:03:47.281 "name": "Malloc1", 00:03:47.281 "aliases": [ 00:03:47.281 "31b84d35-8534-40c5-bd02-20239ebac009" 00:03:47.281 ], 00:03:47.281 "product_name": "Malloc disk", 00:03:47.281 "block_size": 4096, 00:03:47.281 "num_blocks": 256, 00:03:47.281 "uuid": "31b84d35-8534-40c5-bd02-20239ebac009", 00:03:47.281 "assigned_rate_limits": { 00:03:47.281 "rw_ios_per_sec": 0, 00:03:47.281 "rw_mbytes_per_sec": 0, 00:03:47.281 "r_mbytes_per_sec": 0, 00:03:47.281 "w_mbytes_per_sec": 0 00:03:47.281 }, 00:03:47.281 "claimed": false, 00:03:47.281 "zoned": false, 00:03:47.281 "supported_io_types": { 00:03:47.281 "read": true, 00:03:47.281 "write": true, 00:03:47.281 "unmap": true, 00:03:47.281 "write_zeroes": true, 00:03:47.281 "flush": true, 00:03:47.281 "reset": true, 00:03:47.281 "compare": false, 00:03:47.281 "compare_and_write": false, 00:03:47.281 "abort": true, 00:03:47.281 "nvme_admin": false, 00:03:47.281 "nvme_io": false 00:03:47.281 }, 00:03:47.281 "memory_domains": [ 00:03:47.281 { 00:03:47.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.281 "dma_device_type": 2 00:03:47.281 } 00:03:47.281 ], 00:03:47.281 "driver_specific": {} 00:03:47.281 } 00:03:47.281 ]' 00:03:47.281 18:02:05 -- rpc/rpc.sh@32 -- # jq length 00:03:47.281 18:02:05 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.281 18:02:05 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.281 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.281 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.281 18:02:05 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.281 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.281 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.281 18:02:05 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.281 18:02:05 -- rpc/rpc.sh@36 -- # jq length 00:03:47.540 18:02:05 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.540 00:03:47.540 real 0m0.150s 00:03:47.540 user 0m0.098s 00:03:47.540 sys 0m0.017s 00:03:47.540 18:02:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.540 ************************************ 00:03:47.540 END TEST rpc_plugins 00:03:47.540 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.540 ************************************ 00:03:47.540 18:02:05 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:03:47.540 18:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.540 18:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.540 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.540 ************************************ 00:03:47.540 START TEST rpc_trace_cmd_test 00:03:47.540 ************************************ 00:03:47.540 18:02:05 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:03:47.540 18:02:05 -- rpc/rpc.sh@40 -- # local info 00:03:47.540 18:02:05 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:47.540 18:02:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.540 18:02:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.540 18:02:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.540 18:02:05 -- rpc/rpc.sh@42 -- # info='{ 00:03:47.540 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53824", 00:03:47.540 "tpoint_group_mask": "0x8", 00:03:47.540 "iscsi_conn": { 00:03:47.540 "mask": "0x2", 00:03:47.540 "tpoint_mask": "0x0" 00:03:47.540 }, 00:03:47.540 "scsi": { 00:03:47.540 "mask": "0x4", 00:03:47.540 "tpoint_mask": "0x0" 00:03:47.540 }, 00:03:47.540 "bdev": { 00:03:47.540 "mask": "0x8", 00:03:47.540 "tpoint_mask": "0xffffffffffffffff" 00:03:47.540 }, 00:03:47.540 "nvmf_rdma": { 00:03:47.540 "mask": "0x10", 00:03:47.540 "tpoint_mask": "0x0" 00:03:47.540 }, 00:03:47.540 "nvmf_tcp": { 00:03:47.540 "mask": "0x20", 00:03:47.540 "tpoint_mask": "0x0" 00:03:47.540 }, 00:03:47.541 "ftl": { 00:03:47.541 "mask": "0x40", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "blobfs": { 00:03:47.541 "mask": "0x80", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "dsa": { 00:03:47.541 "mask": "0x200", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "thread": { 00:03:47.541 "mask": "0x400", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "nvme_pcie": { 00:03:47.541 "mask": "0x800", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "iaa": { 00:03:47.541 "mask": "0x1000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "nvme_tcp": { 00:03:47.541 "mask": "0x2000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "bdev_nvme": { 00:03:47.541 "mask": "0x4000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 } 00:03:47.541 }' 00:03:47.541 18:02:05 -- rpc/rpc.sh@43 -- # jq length 00:03:47.541 18:02:06 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:47.541 18:02:06 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:47.541 18:02:06 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:47.541 18:02:06 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:47.541 18:02:06 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:47.541 18:02:06 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.800 18:02:06 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.800 18:02:06 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.800 18:02:06 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.800 00:03:47.800 real 0m0.261s 00:03:47.800 user 0m0.224s 00:03:47.800 sys 0m0.027s 00:03:47.800 18:02:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:47.800 ************************************ 00:03:47.800 END TEST rpc_trace_cmd_test 00:03:47.800 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 ************************************ 00:03:47.800 18:02:06 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.800 18:02:06 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.800 18:02:06 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:03:47.800 18:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.800 18:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.800 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 ************************************ 00:03:47.800 START TEST rpc_daemon_integrity 00:03:47.800 ************************************ 00:03:47.800 18:02:06 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:03:47.800 18:02:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.800 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.800 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.800 18:02:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.800 18:02:06 -- rpc/rpc.sh@13 -- # jq length 00:03:47.800 18:02:06 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.800 18:02:06 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.800 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.800 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.800 18:02:06 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.800 18:02:06 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.800 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.800 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.800 18:02:06 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.800 { 00:03:47.800 "name": "Malloc2", 00:03:47.800 "aliases": [ 00:03:47.800 "c29e3fe6-ff22-4d6d-a1a1-b2450b8b02e8" 00:03:47.800 ], 00:03:47.800 "product_name": "Malloc disk", 00:03:47.800 "block_size": 512, 00:03:47.800 "num_blocks": 16384, 00:03:47.800 "uuid": "c29e3fe6-ff22-4d6d-a1a1-b2450b8b02e8", 00:03:47.800 "assigned_rate_limits": { 00:03:47.800 "rw_ios_per_sec": 0, 00:03:47.800 "rw_mbytes_per_sec": 0, 00:03:47.800 "r_mbytes_per_sec": 0, 00:03:47.800 "w_mbytes_per_sec": 0 00:03:47.800 }, 00:03:47.800 "claimed": false, 00:03:47.800 "zoned": false, 00:03:47.800 "supported_io_types": { 00:03:47.800 "read": true, 00:03:47.800 "write": true, 00:03:47.800 "unmap": true, 00:03:47.800 "write_zeroes": true, 00:03:47.800 "flush": true, 00:03:47.800 "reset": true, 00:03:47.800 "compare": false, 00:03:47.800 "compare_and_write": false, 00:03:47.800 "abort": true, 00:03:47.800 "nvme_admin": false, 00:03:47.800 "nvme_io": false 00:03:47.800 }, 00:03:47.800 "memory_domains": [ 00:03:47.800 { 00:03:47.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.800 "dma_device_type": 2 00:03:47.800 } 00:03:47.800 ], 00:03:47.800 "driver_specific": {} 00:03:47.800 } 00:03:47.800 ]' 00:03:47.800 18:02:06 -- rpc/rpc.sh@17 -- # jq length 00:03:48.060 18:02:06 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.060 18:02:06 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.060 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.060 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.060 [2024-11-18 18:02:06.425240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.060 [2024-11-18 18:02:06.425313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.060 [2024-11-18 18:02:06.425344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9adc40 00:03:48.060 [2024-11-18 
18:02:06.425351] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.060 [2024-11-18 18:02:06.426690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.060 [2024-11-18 18:02:06.426735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.060 Passthru0 00:03:48.060 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.060 18:02:06 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.060 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.060 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.060 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.060 18:02:06 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.060 { 00:03:48.060 "name": "Malloc2", 00:03:48.060 "aliases": [ 00:03:48.060 "c29e3fe6-ff22-4d6d-a1a1-b2450b8b02e8" 00:03:48.060 ], 00:03:48.060 "product_name": "Malloc disk", 00:03:48.060 "block_size": 512, 00:03:48.060 "num_blocks": 16384, 00:03:48.060 "uuid": "c29e3fe6-ff22-4d6d-a1a1-b2450b8b02e8", 00:03:48.060 "assigned_rate_limits": { 00:03:48.060 "rw_ios_per_sec": 0, 00:03:48.060 "rw_mbytes_per_sec": 0, 00:03:48.060 "r_mbytes_per_sec": 0, 00:03:48.060 "w_mbytes_per_sec": 0 00:03:48.060 }, 00:03:48.060 "claimed": true, 00:03:48.060 "claim_type": "exclusive_write", 00:03:48.060 "zoned": false, 00:03:48.060 "supported_io_types": { 00:03:48.060 "read": true, 00:03:48.060 "write": true, 00:03:48.060 "unmap": true, 00:03:48.060 "write_zeroes": true, 00:03:48.060 "flush": true, 00:03:48.060 "reset": true, 00:03:48.060 "compare": false, 00:03:48.060 "compare_and_write": false, 00:03:48.060 "abort": true, 00:03:48.060 "nvme_admin": false, 00:03:48.060 "nvme_io": false 00:03:48.060 }, 00:03:48.060 "memory_domains": [ 00:03:48.060 { 00:03:48.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.060 "dma_device_type": 2 00:03:48.060 } 00:03:48.060 ], 00:03:48.060 "driver_specific": {} 00:03:48.060 }, 00:03:48.060 { 00:03:48.060 "name": "Passthru0", 00:03:48.060 "aliases": [ 00:03:48.060 "87bf3a73-8456-5b58-98a6-277756271d1f" 00:03:48.060 ], 00:03:48.060 "product_name": "passthru", 00:03:48.060 "block_size": 512, 00:03:48.060 "num_blocks": 16384, 00:03:48.060 "uuid": "87bf3a73-8456-5b58-98a6-277756271d1f", 00:03:48.060 "assigned_rate_limits": { 00:03:48.060 "rw_ios_per_sec": 0, 00:03:48.060 "rw_mbytes_per_sec": 0, 00:03:48.060 "r_mbytes_per_sec": 0, 00:03:48.060 "w_mbytes_per_sec": 0 00:03:48.060 }, 00:03:48.060 "claimed": false, 00:03:48.060 "zoned": false, 00:03:48.060 "supported_io_types": { 00:03:48.060 "read": true, 00:03:48.060 "write": true, 00:03:48.060 "unmap": true, 00:03:48.060 "write_zeroes": true, 00:03:48.060 "flush": true, 00:03:48.060 "reset": true, 00:03:48.060 "compare": false, 00:03:48.060 "compare_and_write": false, 00:03:48.060 "abort": true, 00:03:48.060 "nvme_admin": false, 00:03:48.060 "nvme_io": false 00:03:48.060 }, 00:03:48.060 "memory_domains": [ 00:03:48.060 { 00:03:48.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.060 "dma_device_type": 2 00:03:48.060 } 00:03:48.060 ], 00:03:48.060 "driver_specific": { 00:03:48.060 "passthru": { 00:03:48.060 "name": "Passthru0", 00:03:48.060 "base_bdev_name": "Malloc2" 00:03:48.060 } 00:03:48.060 } 00:03:48.060 } 00:03:48.060 ]' 00:03:48.060 18:02:06 -- rpc/rpc.sh@21 -- # jq length 00:03:48.060 18:02:06 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.060 18:02:06 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.060 18:02:06 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.060 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.060 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.060 18:02:06 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.060 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.060 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.060 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.060 18:02:06 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.060 18:02:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.060 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.060 18:02:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.060 18:02:06 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.061 18:02:06 -- rpc/rpc.sh@26 -- # jq length 00:03:48.061 18:02:06 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.061 00:03:48.061 real 0m0.321s 00:03:48.061 user 0m0.214s 00:03:48.061 sys 0m0.042s 00:03:48.061 18:02:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.061 ************************************ 00:03:48.061 END TEST rpc_daemon_integrity 00:03:48.061 ************************************ 00:03:48.061 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.061 18:02:06 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.061 18:02:06 -- rpc/rpc.sh@84 -- # killprocess 53824 00:03:48.061 18:02:06 -- common/autotest_common.sh@936 -- # '[' -z 53824 ']' 00:03:48.061 18:02:06 -- common/autotest_common.sh@940 -- # kill -0 53824 00:03:48.061 18:02:06 -- common/autotest_common.sh@941 -- # uname 00:03:48.061 18:02:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:48.061 18:02:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53824 00:03:48.321 18:02:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:48.321 18:02:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:48.321 killing process with pid 53824 00:03:48.321 18:02:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53824' 00:03:48.321 18:02:06 -- common/autotest_common.sh@955 -- # kill 53824 00:03:48.321 18:02:06 -- common/autotest_common.sh@960 -- # wait 53824 00:03:48.581 ************************************ 00:03:48.581 END TEST rpc 00:03:48.581 ************************************ 00:03:48.581 00:03:48.581 real 0m2.764s 00:03:48.581 user 0m3.683s 00:03:48.581 sys 0m0.570s 00:03:48.581 18:02:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.581 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.581 18:02:06 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.581 18:02:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.581 18:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.581 18:02:06 -- common/autotest_common.sh@10 -- # set +x 00:03:48.581 ************************************ 00:03:48.581 START TEST rpc_client 00:03:48.581 ************************************ 00:03:48.581 18:02:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.581 * Looking for test storage... 
00:03:48.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:48.581 18:02:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.581 18:02:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.581 18:02:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.581 18:02:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.581 18:02:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.581 18:02:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.581 18:02:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.581 18:02:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.581 18:02:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.581 18:02:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.581 18:02:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.581 18:02:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.581 18:02:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.581 18:02:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.581 18:02:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.581 18:02:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.581 18:02:07 -- scripts/common.sh@344 -- # : 1 00:03:48.581 18:02:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.581 18:02:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.581 18:02:07 -- scripts/common.sh@364 -- # decimal 1 00:03:48.581 18:02:07 -- scripts/common.sh@352 -- # local d=1 00:03:48.581 18:02:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.581 18:02:07 -- scripts/common.sh@354 -- # echo 1 00:03:48.581 18:02:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.581 18:02:07 -- scripts/common.sh@365 -- # decimal 2 00:03:48.581 18:02:07 -- scripts/common.sh@352 -- # local d=2 00:03:48.581 18:02:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.581 18:02:07 -- scripts/common.sh@354 -- # echo 2 00:03:48.581 18:02:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.581 18:02:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.581 18:02:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.581 18:02:07 -- scripts/common.sh@367 -- # return 0 00:03:48.581 18:02:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.581 18:02:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.581 --rc genhtml_branch_coverage=1 00:03:48.581 --rc genhtml_function_coverage=1 00:03:48.581 --rc genhtml_legend=1 00:03:48.581 --rc geninfo_all_blocks=1 00:03:48.581 --rc geninfo_unexecuted_blocks=1 00:03:48.581 00:03:48.581 ' 00:03:48.581 18:02:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.581 --rc genhtml_branch_coverage=1 00:03:48.581 --rc genhtml_function_coverage=1 00:03:48.581 --rc genhtml_legend=1 00:03:48.581 --rc geninfo_all_blocks=1 00:03:48.581 --rc geninfo_unexecuted_blocks=1 00:03:48.581 00:03:48.581 ' 00:03:48.581 18:02:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.581 --rc genhtml_branch_coverage=1 00:03:48.581 --rc genhtml_function_coverage=1 00:03:48.581 --rc genhtml_legend=1 00:03:48.581 --rc geninfo_all_blocks=1 00:03:48.581 --rc geninfo_unexecuted_blocks=1 00:03:48.581 00:03:48.581 ' 00:03:48.581 
18:02:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.581 --rc genhtml_branch_coverage=1 00:03:48.581 --rc genhtml_function_coverage=1 00:03:48.581 --rc genhtml_legend=1 00:03:48.581 --rc geninfo_all_blocks=1 00:03:48.581 --rc geninfo_unexecuted_blocks=1 00:03:48.581 00:03:48.581 ' 00:03:48.581 18:02:07 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:48.581 OK 00:03:48.581 18:02:07 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.581 00:03:48.581 real 0m0.171s 00:03:48.581 user 0m0.101s 00:03:48.581 sys 0m0.081s 00:03:48.581 18:02:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.581 18:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:48.581 ************************************ 00:03:48.581 END TEST rpc_client 00:03:48.581 ************************************ 00:03:48.841 18:02:07 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.841 18:02:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.841 18:02:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.841 18:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:48.841 ************************************ 00:03:48.841 START TEST json_config 00:03:48.841 ************************************ 00:03:48.841 18:02:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.841 18:02:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:48.841 18:02:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:48.841 18:02:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:48.841 18:02:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:48.841 18:02:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:48.841 18:02:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:48.841 18:02:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:48.841 18:02:07 -- scripts/common.sh@335 -- # IFS=.-: 00:03:48.841 18:02:07 -- scripts/common.sh@335 -- # read -ra ver1 00:03:48.841 18:02:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.841 18:02:07 -- scripts/common.sh@336 -- # read -ra ver2 00:03:48.841 18:02:07 -- scripts/common.sh@337 -- # local 'op=<' 00:03:48.841 18:02:07 -- scripts/common.sh@339 -- # ver1_l=2 00:03:48.841 18:02:07 -- scripts/common.sh@340 -- # ver2_l=1 00:03:48.841 18:02:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:48.841 18:02:07 -- scripts/common.sh@343 -- # case "$op" in 00:03:48.841 18:02:07 -- scripts/common.sh@344 -- # : 1 00:03:48.841 18:02:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:48.841 18:02:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.841 18:02:07 -- scripts/common.sh@364 -- # decimal 1 00:03:48.841 18:02:07 -- scripts/common.sh@352 -- # local d=1 00:03:48.841 18:02:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.841 18:02:07 -- scripts/common.sh@354 -- # echo 1 00:03:48.841 18:02:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:48.841 18:02:07 -- scripts/common.sh@365 -- # decimal 2 00:03:48.841 18:02:07 -- scripts/common.sh@352 -- # local d=2 00:03:48.841 18:02:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.841 18:02:07 -- scripts/common.sh@354 -- # echo 2 00:03:48.841 18:02:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:48.841 18:02:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:48.841 18:02:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:48.842 18:02:07 -- scripts/common.sh@367 -- # return 0 00:03:48.842 18:02:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.842 18:02:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.842 --rc genhtml_branch_coverage=1 00:03:48.842 --rc genhtml_function_coverage=1 00:03:48.842 --rc genhtml_legend=1 00:03:48.842 --rc geninfo_all_blocks=1 00:03:48.842 --rc geninfo_unexecuted_blocks=1 00:03:48.842 00:03:48.842 ' 00:03:48.842 18:02:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.842 --rc genhtml_branch_coverage=1 00:03:48.842 --rc genhtml_function_coverage=1 00:03:48.842 --rc genhtml_legend=1 00:03:48.842 --rc geninfo_all_blocks=1 00:03:48.842 --rc geninfo_unexecuted_blocks=1 00:03:48.842 00:03:48.842 ' 00:03:48.842 18:02:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.842 --rc genhtml_branch_coverage=1 00:03:48.842 --rc genhtml_function_coverage=1 00:03:48.842 --rc genhtml_legend=1 00:03:48.842 --rc geninfo_all_blocks=1 00:03:48.842 --rc geninfo_unexecuted_blocks=1 00:03:48.842 00:03:48.842 ' 00:03:48.842 18:02:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:48.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.842 --rc genhtml_branch_coverage=1 00:03:48.842 --rc genhtml_function_coverage=1 00:03:48.842 --rc genhtml_legend=1 00:03:48.842 --rc geninfo_all_blocks=1 00:03:48.842 --rc geninfo_unexecuted_blocks=1 00:03:48.842 00:03:48.842 ' 00:03:48.842 18:02:07 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.842 18:02:07 -- nvmf/common.sh@7 -- # uname -s 00:03:48.842 18:02:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.842 18:02:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.842 18:02:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.842 18:02:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.842 18:02:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.842 18:02:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.842 18:02:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.842 18:02:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.842 18:02:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.842 18:02:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.842 18:02:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 
00:03:48.842 18:02:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:03:48.842 18:02:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.842 18:02:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.842 18:02:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.842 18:02:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.842 18:02:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.842 18:02:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.842 18:02:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.842 18:02:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.842 18:02:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.842 18:02:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.842 18:02:07 -- paths/export.sh@5 -- # export PATH 00:03:48.842 18:02:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.842 18:02:07 -- nvmf/common.sh@46 -- # : 0 00:03:48.842 18:02:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:48.842 18:02:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:48.842 18:02:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:48.842 18:02:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.842 18:02:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.842 18:02:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:48.842 18:02:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:48.842 18:02:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:48.842 18:02:07 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.842 18:02:07 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.842 18:02:07 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:48.842 18:02:07 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.842 18:02:07 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:48.842 INFO: JSON configuration test init 00:03:48.842 18:02:07 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.842 18:02:07 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:48.842 18:02:07 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:48.842 18:02:07 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:48.842 18:02:07 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:48.842 18:02:07 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.842 18:02:07 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:48.842 18:02:07 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:48.842 18:02:07 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:48.842 18:02:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.842 18:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:48.842 18:02:07 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:48.842 18:02:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.842 18:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:48.842 Waiting for target to run... 00:03:48.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.842 18:02:07 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.842 18:02:07 -- json_config/json_config.sh@98 -- # local app=target 00:03:48.842 18:02:07 -- json_config/json_config.sh@99 -- # shift 00:03:48.842 18:02:07 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:48.842 18:02:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.842 18:02:07 -- json_config/json_config.sh@111 -- # app_pid[$app]=54077 00:03:48.842 18:02:07 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:48.842 18:02:07 -- json_config/json_config.sh@114 -- # waitforlisten 54077 /var/tmp/spdk_tgt.sock 00:03:48.842 18:02:07 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.842 18:02:07 -- common/autotest_common.sh@829 -- # '[' -z 54077 ']' 00:03:48.842 18:02:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.842 18:02:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.842 18:02:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:03:48.842 18:02:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.842 18:02:07 -- common/autotest_common.sh@10 -- # set +x 00:03:49.101 [2024-11-18 18:02:07.466937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:49.101 [2024-11-18 18:02:07.467026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54077 ] 00:03:49.360 [2024-11-18 18:02:07.754285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.360 [2024-11-18 18:02:07.789812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:49.360 [2024-11-18 18:02:07.789974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.928 00:03:49.928 18:02:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:49.928 18:02:08 -- common/autotest_common.sh@862 -- # return 0 00:03:49.928 18:02:08 -- json_config/json_config.sh@115 -- # echo '' 00:03:49.928 18:02:08 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:49.928 18:02:08 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:49.928 18:02:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.928 18:02:08 -- common/autotest_common.sh@10 -- # set +x 00:03:49.928 18:02:08 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:49.928 18:02:08 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:49.928 18:02:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.928 18:02:08 -- common/autotest_common.sh@10 -- # set +x 00:03:49.928 18:02:08 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:50.193 18:02:08 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:50.193 18:02:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:50.481 18:02:08 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:50.481 18:02:08 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:50.481 18:02:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.481 18:02:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.481 18:02:08 -- json_config/json_config.sh@48 -- # local ret=0 00:03:50.481 18:02:08 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:50.481 18:02:08 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:50.481 18:02:08 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:50.481 18:02:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:50.481 18:02:08 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:50.759 18:02:09 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:50.759 18:02:09 -- json_config/json_config.sh@51 -- # local get_types 00:03:50.759 18:02:09 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:50.759 18:02:09 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:50.759 18:02:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.759 18:02:09 -- 
common/autotest_common.sh@10 -- # set +x 00:03:50.759 18:02:09 -- json_config/json_config.sh@58 -- # return 0 00:03:50.759 18:02:09 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:50.759 18:02:09 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:50.759 18:02:09 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:50.759 18:02:09 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:50.759 18:02:09 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:50.759 18:02:09 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:50.760 18:02:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.760 18:02:09 -- common/autotest_common.sh@10 -- # set +x 00:03:50.760 18:02:09 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:50.760 18:02:09 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:50.760 18:02:09 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:50.760 18:02:09 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:50.760 18:02:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.028 MallocForNvmf0 00:03:51.028 18:02:09 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.028 18:02:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.287 MallocForNvmf1 00:03:51.287 18:02:09 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.287 18:02:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.546 [2024-11-18 18:02:10.013848] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.546 18:02:10 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:51.546 18:02:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:51.806 18:02:10 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:51.806 18:02:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.065 18:02:10 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.065 18:02:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.331 18:02:10 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.331 18:02:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.331 [2024-11-18 18:02:10.886368] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:52.331 
18:02:10 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:52.331 18:02:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.331 18:02:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.591 18:02:10 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:52.591 18:02:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.591 18:02:10 -- common/autotest_common.sh@10 -- # set +x 00:03:52.591 18:02:10 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:52.591 18:02:10 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.591 18:02:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.591 MallocBdevForConfigChangeCheck 00:03:52.850 18:02:11 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:52.850 18:02:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.850 18:02:11 -- common/autotest_common.sh@10 -- # set +x 00:03:52.850 18:02:11 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:52.850 18:02:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.109 INFO: shutting down applications... 00:03:53.109 18:02:11 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:53.109 18:02:11 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:53.109 18:02:11 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:53.109 18:02:11 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:53.109 18:02:11 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.368 Calling clear_iscsi_subsystem 00:03:53.368 Calling clear_nvmf_subsystem 00:03:53.368 Calling clear_nbd_subsystem 00:03:53.368 Calling clear_ublk_subsystem 00:03:53.368 Calling clear_vhost_blk_subsystem 00:03:53.368 Calling clear_vhost_scsi_subsystem 00:03:53.368 Calling clear_scheduler_subsystem 00:03:53.368 Calling clear_bdev_subsystem 00:03:53.368 Calling clear_accel_subsystem 00:03:53.368 Calling clear_vmd_subsystem 00:03:53.368 Calling clear_sock_subsystem 00:03:53.368 Calling clear_iobuf_subsystem 00:03:53.368 18:02:11 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:53.368 18:02:11 -- json_config/json_config.sh@396 -- # count=100 00:03:53.368 18:02:11 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:53.368 18:02:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.368 18:02:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:53.368 18:02:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:53.936 18:02:12 -- json_config/json_config.sh@398 -- # break 00:03:53.936 18:02:12 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:53.936 18:02:12 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:53.936 18:02:12 -- json_config/json_config.sh@120 -- # local app=target 00:03:53.936 18:02:12 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:53.936 18:02:12 -- json_config/json_config.sh@124 -- # [[ -n 54077 ]] 00:03:53.936 18:02:12 -- json_config/json_config.sh@127 -- # kill -SIGINT 54077 00:03:53.936 18:02:12 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:03:53.936 18:02:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:53.936 18:02:12 -- json_config/json_config.sh@130 -- # kill -0 54077 00:03:53.936 18:02:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:54.195 18:02:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:54.195 18:02:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.195 18:02:12 -- json_config/json_config.sh@130 -- # kill -0 54077 00:03:54.195 SPDK target shutdown done 00:03:54.195 INFO: relaunching applications... 00:03:54.195 18:02:12 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:54.195 18:02:12 -- json_config/json_config.sh@132 -- # break 00:03:54.195 18:02:12 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:54.195 18:02:12 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:54.195 18:02:12 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:54.195 18:02:12 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.195 18:02:12 -- json_config/json_config.sh@98 -- # local app=target 00:03:54.195 18:02:12 -- json_config/json_config.sh@99 -- # shift 00:03:54.195 18:02:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:54.195 18:02:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:54.195 18:02:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:54.195 18:02:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.195 18:02:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.195 18:02:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=54262 00:03:54.195 18:02:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:54.195 Waiting for target to run... 00:03:54.195 18:02:12 -- json_config/json_config.sh@114 -- # waitforlisten 54262 /var/tmp/spdk_tgt.sock 00:03:54.195 18:02:12 -- common/autotest_common.sh@829 -- # '[' -z 54262 ']' 00:03:54.195 18:02:12 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.195 18:02:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.195 18:02:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:54.195 18:02:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.195 18:02:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:54.195 18:02:12 -- common/autotest_common.sh@10 -- # set +x 00:03:54.453 [2024-11-18 18:02:12.821833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:03:54.453 [2024-11-18 18:02:12.822805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54262 ] 00:03:54.712 [2024-11-18 18:02:13.130890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.712 [2024-11-18 18:02:13.171834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:54.712 [2024-11-18 18:02:13.172017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.971 [2024-11-18 18:02:13.472061] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.971 [2024-11-18 18:02:13.504115] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.230 00:03:55.230 INFO: Checking if target configuration is the same... 00:03:55.230 18:02:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:55.230 18:02:13 -- common/autotest_common.sh@862 -- # return 0 00:03:55.230 18:02:13 -- json_config/json_config.sh@115 -- # echo '' 00:03:55.230 18:02:13 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:55.230 18:02:13 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.230 18:02:13 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.230 18:02:13 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:55.230 18:02:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.230 + '[' 2 -ne 2 ']' 00:03:55.488 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:55.488 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:55.488 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:55.488 +++ basename /dev/fd/62 00:03:55.488 ++ mktemp /tmp/62.XXX 00:03:55.488 + tmp_file_1=/tmp/62.v9b 00:03:55.488 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.488 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.488 + tmp_file_2=/tmp/spdk_tgt_config.json.fOm 00:03:55.488 + ret=0 00:03:55.488 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:55.747 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:55.747 + diff -u /tmp/62.v9b /tmp/spdk_tgt_config.json.fOm 00:03:55.747 INFO: JSON config files are the same 00:03:55.747 + echo 'INFO: JSON config files are the same' 00:03:55.747 + rm /tmp/62.v9b /tmp/spdk_tgt_config.json.fOm 00:03:55.747 + exit 0 00:03:55.747 INFO: changing configuration and checking if this can be detected... 00:03:55.747 18:02:14 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:55.747 18:02:14 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:03:55.747 18:02:14 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.747 18:02:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.005 18:02:14 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.005 18:02:14 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:56.005 18:02:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.005 + '[' 2 -ne 2 ']' 00:03:56.005 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.005 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:03:56.005 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:56.005 +++ basename /dev/fd/62 00:03:56.005 ++ mktemp /tmp/62.XXX 00:03:56.005 + tmp_file_1=/tmp/62.q6K 00:03:56.005 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.006 + tmp_file_2=/tmp/spdk_tgt_config.json.4FE 00:03:56.006 + ret=0 00:03:56.006 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.573 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.573 + diff -u /tmp/62.q6K /tmp/spdk_tgt_config.json.4FE 00:03:56.573 + ret=1 00:03:56.573 + echo '=== Start of file: /tmp/62.q6K ===' 00:03:56.573 + cat /tmp/62.q6K 00:03:56.573 + echo '=== End of file: /tmp/62.q6K ===' 00:03:56.573 + echo '' 00:03:56.573 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4FE ===' 00:03:56.573 + cat /tmp/spdk_tgt_config.json.4FE 00:03:56.573 + echo '=== End of file: /tmp/spdk_tgt_config.json.4FE ===' 00:03:56.573 + echo '' 00:03:56.573 + rm /tmp/62.q6K /tmp/spdk_tgt_config.json.4FE 00:03:56.573 + exit 1 00:03:56.573 INFO: configuration change detected. 00:03:56.573 18:02:15 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:03:56.573 18:02:15 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:56.573 18:02:15 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:56.573 18:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.573 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.573 18:02:15 -- json_config/json_config.sh@360 -- # local ret=0 00:03:56.573 18:02:15 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:56.573 18:02:15 -- json_config/json_config.sh@370 -- # [[ -n 54262 ]] 00:03:56.573 18:02:15 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:56.573 18:02:15 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.573 18:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.573 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.573 18:02:15 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:03:56.573 18:02:15 -- json_config/json_config.sh@246 -- # uname -s 00:03:56.573 18:02:15 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:03:56.573 18:02:15 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:03:56.573 18:02:15 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:56.573 18:02:15 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.573 18:02:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.573 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.573 18:02:15 -- json_config/json_config.sh@376 -- # killprocess 54262 00:03:56.573 18:02:15 -- common/autotest_common.sh@936 -- # '[' -z 54262 ']' 00:03:56.573 18:02:15 -- common/autotest_common.sh@940 -- # kill -0 54262 00:03:56.573 18:02:15 -- common/autotest_common.sh@941 -- # uname 00:03:56.573 18:02:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:56.573 18:02:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54262 00:03:56.573 killing process with pid 54262 00:03:56.573 18:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:56.573 18:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:56.573 18:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54262' 00:03:56.573 18:02:15 -- common/autotest_common.sh@955 -- # kill 54262 00:03:56.573 18:02:15 -- common/autotest_common.sh@960 -- # wait 54262 00:03:56.833 18:02:15 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.833 18:02:15 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:56.833 18:02:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.833 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.833 INFO: Success 00:03:56.833 18:02:15 -- json_config/json_config.sh@381 -- # return 0 00:03:56.833 18:02:15 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:56.833 00:03:56.833 real 0m8.134s 00:03:56.833 user 0m11.767s 00:03:56.833 sys 0m1.403s 00:03:56.833 18:02:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:56.833 ************************************ 00:03:56.833 END TEST json_config 00:03:56.833 ************************************ 00:03:56.833 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.833 18:02:15 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:56.833 
18:02:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.833 18:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.833 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.833 ************************************ 00:03:56.833 START TEST json_config_extra_key 00:03:56.833 ************************************ 00:03:56.833 18:02:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:57.092 18:02:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:57.092 18:02:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:57.092 18:02:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:57.092 18:02:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:57.092 18:02:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:57.092 18:02:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:57.092 18:02:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:57.092 18:02:15 -- scripts/common.sh@335 -- # IFS=.-: 00:03:57.092 18:02:15 -- scripts/common.sh@335 -- # read -ra ver1 00:03:57.092 18:02:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.092 18:02:15 -- scripts/common.sh@336 -- # read -ra ver2 00:03:57.092 18:02:15 -- scripts/common.sh@337 -- # local 'op=<' 00:03:57.092 18:02:15 -- scripts/common.sh@339 -- # ver1_l=2 00:03:57.092 18:02:15 -- scripts/common.sh@340 -- # ver2_l=1 00:03:57.092 18:02:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:57.092 18:02:15 -- scripts/common.sh@343 -- # case "$op" in 00:03:57.092 18:02:15 -- scripts/common.sh@344 -- # : 1 00:03:57.092 18:02:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:57.092 18:02:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.092 18:02:15 -- scripts/common.sh@364 -- # decimal 1 00:03:57.092 18:02:15 -- scripts/common.sh@352 -- # local d=1 00:03:57.092 18:02:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.092 18:02:15 -- scripts/common.sh@354 -- # echo 1 00:03:57.092 18:02:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:57.092 18:02:15 -- scripts/common.sh@365 -- # decimal 2 00:03:57.092 18:02:15 -- scripts/common.sh@352 -- # local d=2 00:03:57.092 18:02:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.092 18:02:15 -- scripts/common.sh@354 -- # echo 2 00:03:57.092 18:02:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:57.092 18:02:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:57.092 18:02:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:57.092 18:02:15 -- scripts/common.sh@367 -- # return 0 00:03:57.092 18:02:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.092 18:02:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:57.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.092 --rc genhtml_branch_coverage=1 00:03:57.092 --rc genhtml_function_coverage=1 00:03:57.092 --rc genhtml_legend=1 00:03:57.092 --rc geninfo_all_blocks=1 00:03:57.092 --rc geninfo_unexecuted_blocks=1 00:03:57.092 00:03:57.092 ' 00:03:57.092 18:02:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:57.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.092 --rc genhtml_branch_coverage=1 00:03:57.092 --rc genhtml_function_coverage=1 00:03:57.092 --rc genhtml_legend=1 00:03:57.092 --rc geninfo_all_blocks=1 00:03:57.092 --rc geninfo_unexecuted_blocks=1 00:03:57.092 00:03:57.092 ' 
00:03:57.092 18:02:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:57.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.092 --rc genhtml_branch_coverage=1 00:03:57.092 --rc genhtml_function_coverage=1 00:03:57.092 --rc genhtml_legend=1 00:03:57.092 --rc geninfo_all_blocks=1 00:03:57.092 --rc geninfo_unexecuted_blocks=1 00:03:57.092 00:03:57.092 ' 00:03:57.092 18:02:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:57.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.092 --rc genhtml_branch_coverage=1 00:03:57.092 --rc genhtml_function_coverage=1 00:03:57.092 --rc genhtml_legend=1 00:03:57.092 --rc geninfo_all_blocks=1 00:03:57.092 --rc geninfo_unexecuted_blocks=1 00:03:57.092 00:03:57.092 ' 00:03:57.092 18:02:15 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:57.092 18:02:15 -- nvmf/common.sh@7 -- # uname -s 00:03:57.092 18:02:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.092 18:02:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.092 18:02:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.093 18:02:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.093 18:02:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.093 18:02:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.093 18:02:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.093 18:02:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.093 18:02:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.093 18:02:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.093 18:02:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:03:57.093 18:02:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:03:57.093 18:02:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.093 18:02:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.093 18:02:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.093 18:02:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:57.093 18:02:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.093 18:02:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.093 18:02:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.093 18:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.093 18:02:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.093 18:02:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.093 18:02:15 -- paths/export.sh@5 -- # export PATH 00:03:57.093 18:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.093 18:02:15 -- nvmf/common.sh@46 -- # : 0 00:03:57.093 18:02:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:57.093 18:02:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:57.093 18:02:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:57.093 18:02:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.093 18:02:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.093 18:02:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:57.093 18:02:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:57.093 18:02:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:57.093 INFO: launching applications... 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54415 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:57.093 Waiting for target to run... 
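At this point json_config_extra_key.sh has declared its per-app bookkeeping (app_pid, app_socket, app_params and configs_path as associative arrays keyed by "target") and launched the target with the extra_key.json config, as traced above. Stripped of the xtrace noise, the launch-and-wait pattern amounts to roughly the following sketch; waitforlisten is the autotest_common.sh helper used in the trace, and its exact polling behaviour is assumed here rather than shown:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=test/json_config/extra_key.json

    # start the target with the JSON config and remember its pid
    build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
        --json "${configs_path[target]}" &
    app_pid[target]=$!
    # block until the pid is up and listening on the UNIX-domain RPC socket
    waitforlisten "${app_pid[target]}" "${app_socket[target]}"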
00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:57.093 18:02:15 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54415 /var/tmp/spdk_tgt.sock 00:03:57.093 18:02:15 -- common/autotest_common.sh@829 -- # '[' -z 54415 ']' 00:03:57.093 18:02:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.093 18:02:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.093 18:02:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.093 18:02:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.093 18:02:15 -- common/autotest_common.sh@10 -- # set +x 00:03:57.093 [2024-11-18 18:02:15.631402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:57.093 [2024-11-18 18:02:15.631713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54415 ] 00:03:57.352 [2024-11-18 18:02:15.947589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.615 [2024-11-18 18:02:15.990756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:57.615 [2024-11-18 18:02:15.991214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.185 00:03:58.185 INFO: shutting down applications... 00:03:58.185 18:02:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.185 18:02:16 -- common/autotest_common.sh@862 -- # return 0 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
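The shutdown that follows is deliberately gentle: json_config_test_shutdown_app sends SIGINT to the recorded pid, then polls with kill -0, sleeping 0.5 s between attempts for up to 30 iterations, before declaring "SPDK target shutdown done". A condensed sketch of that loop (the 30-iteration bound and the 0.5 s sleep are taken from the trace; error handling is omitted):

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 fails once the process has exited
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'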
00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54415 ]] 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54415 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54415 00:03:58.185 18:02:16 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54415 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@52 -- # break 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:03:58.753 SPDK target shutdown done 00:03:58.753 18:02:17 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:03:58.753 Success 00:03:58.753 ************************************ 00:03:58.753 END TEST json_config_extra_key 00:03:58.753 ************************************ 00:03:58.753 00:03:58.753 real 0m1.754s 00:03:58.753 user 0m1.644s 00:03:58.753 sys 0m0.326s 00:03:58.753 18:02:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.753 18:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:58.753 18:02:17 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.753 18:02:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.753 18:02:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.753 18:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:58.753 ************************************ 00:03:58.753 START TEST alias_rpc 00:03:58.753 ************************************ 00:03:58.753 18:02:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.753 * Looking for test storage... 
00:03:58.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:58.753 18:02:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:58.753 18:02:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:58.753 18:02:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:59.013 18:02:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:59.013 18:02:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:59.013 18:02:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:59.013 18:02:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:59.013 18:02:17 -- scripts/common.sh@335 -- # IFS=.-: 00:03:59.013 18:02:17 -- scripts/common.sh@335 -- # read -ra ver1 00:03:59.013 18:02:17 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.013 18:02:17 -- scripts/common.sh@336 -- # read -ra ver2 00:03:59.013 18:02:17 -- scripts/common.sh@337 -- # local 'op=<' 00:03:59.013 18:02:17 -- scripts/common.sh@339 -- # ver1_l=2 00:03:59.013 18:02:17 -- scripts/common.sh@340 -- # ver2_l=1 00:03:59.013 18:02:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:59.013 18:02:17 -- scripts/common.sh@343 -- # case "$op" in 00:03:59.013 18:02:17 -- scripts/common.sh@344 -- # : 1 00:03:59.013 18:02:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:59.013 18:02:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:59.013 18:02:17 -- scripts/common.sh@364 -- # decimal 1 00:03:59.013 18:02:17 -- scripts/common.sh@352 -- # local d=1 00:03:59.013 18:02:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.013 18:02:17 -- scripts/common.sh@354 -- # echo 1 00:03:59.013 18:02:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:59.013 18:02:17 -- scripts/common.sh@365 -- # decimal 2 00:03:59.013 18:02:17 -- scripts/common.sh@352 -- # local d=2 00:03:59.013 18:02:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.013 18:02:17 -- scripts/common.sh@354 -- # echo 2 00:03:59.013 18:02:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:59.013 18:02:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:59.013 18:02:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:59.013 18:02:17 -- scripts/common.sh@367 -- # return 0 00:03:59.013 18:02:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.013 18:02:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:59.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.014 --rc genhtml_branch_coverage=1 00:03:59.014 --rc genhtml_function_coverage=1 00:03:59.014 --rc genhtml_legend=1 00:03:59.014 --rc geninfo_all_blocks=1 00:03:59.014 --rc geninfo_unexecuted_blocks=1 00:03:59.014 00:03:59.014 ' 00:03:59.014 18:02:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:59.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.014 --rc genhtml_branch_coverage=1 00:03:59.014 --rc genhtml_function_coverage=1 00:03:59.014 --rc genhtml_legend=1 00:03:59.014 --rc geninfo_all_blocks=1 00:03:59.014 --rc geninfo_unexecuted_blocks=1 00:03:59.014 00:03:59.014 ' 00:03:59.014 18:02:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:59.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.014 --rc genhtml_branch_coverage=1 00:03:59.014 --rc genhtml_function_coverage=1 00:03:59.014 --rc genhtml_legend=1 00:03:59.014 --rc geninfo_all_blocks=1 00:03:59.014 --rc geninfo_unexecuted_blocks=1 00:03:59.014 00:03:59.014 ' 
00:03:59.014 18:02:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:59.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.014 --rc genhtml_branch_coverage=1 00:03:59.014 --rc genhtml_function_coverage=1 00:03:59.014 --rc genhtml_legend=1 00:03:59.014 --rc geninfo_all_blocks=1 00:03:59.014 --rc geninfo_unexecuted_blocks=1 00:03:59.014 00:03:59.014 ' 00:03:59.014 18:02:17 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.014 18:02:17 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54481 00:03:59.014 18:02:17 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.014 18:02:17 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54481 00:03:59.014 18:02:17 -- common/autotest_common.sh@829 -- # '[' -z 54481 ']' 00:03:59.014 18:02:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.014 18:02:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.014 18:02:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.014 18:02:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.014 18:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:59.014 [2024-11-18 18:02:17.468890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:03:59.014 [2024-11-18 18:02:17.469197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54481 ] 00:03:59.014 [2024-11-18 18:02:17.611155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.274 [2024-11-18 18:02:17.666692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:59.274 [2024-11-18 18:02:17.667123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.212 18:02:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:00.212 18:02:18 -- common/autotest_common.sh@862 -- # return 0 00:04:00.212 18:02:18 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:00.212 18:02:18 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54481 00:04:00.212 18:02:18 -- common/autotest_common.sh@936 -- # '[' -z 54481 ']' 00:04:00.212 18:02:18 -- common/autotest_common.sh@940 -- # kill -0 54481 00:04:00.212 18:02:18 -- common/autotest_common.sh@941 -- # uname 00:04:00.212 18:02:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:00.212 18:02:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54481 00:04:00.472 killing process with pid 54481 00:04:00.472 18:02:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:00.472 18:02:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:00.472 18:02:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54481' 00:04:00.472 18:02:18 -- common/autotest_common.sh@955 -- # kill 54481 00:04:00.472 18:02:18 -- common/autotest_common.sh@960 -- # wait 54481 00:04:00.742 ************************************ 00:04:00.742 END TEST alias_rpc 00:04:00.742 ************************************ 00:04:00.742 00:04:00.742 real 0m1.905s 00:04:00.742 user 0m2.302s 00:04:00.742 sys 0m0.362s 
00:04:00.742 18:02:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.742 18:02:19 -- common/autotest_common.sh@10 -- # set +x 00:04:00.742 18:02:19 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:00.742 18:02:19 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:00.742 18:02:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.742 18:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.742 18:02:19 -- common/autotest_common.sh@10 -- # set +x 00:04:00.742 ************************************ 00:04:00.742 START TEST spdkcli_tcp 00:04:00.742 ************************************ 00:04:00.742 18:02:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:00.742 * Looking for test storage... 00:04:00.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:00.742 18:02:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:00.742 18:02:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:00.742 18:02:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:00.742 18:02:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:00.742 18:02:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:00.742 18:02:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:00.742 18:02:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:00.742 18:02:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:00.742 18:02:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:00.742 18:02:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.742 18:02:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:00.742 18:02:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:00.742 18:02:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:00.742 18:02:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:00.742 18:02:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:00.742 18:02:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:00.742 18:02:19 -- scripts/common.sh@344 -- # : 1 00:04:00.742 18:02:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:00.742 18:02:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.742 18:02:19 -- scripts/common.sh@364 -- # decimal 1 00:04:00.742 18:02:19 -- scripts/common.sh@352 -- # local d=1 00:04:00.742 18:02:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.742 18:02:19 -- scripts/common.sh@354 -- # echo 1 00:04:00.742 18:02:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:00.742 18:02:19 -- scripts/common.sh@365 -- # decimal 2 00:04:01.001 18:02:19 -- scripts/common.sh@352 -- # local d=2 00:04:01.001 18:02:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.001 18:02:19 -- scripts/common.sh@354 -- # echo 2 00:04:01.001 18:02:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:01.001 18:02:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:01.001 18:02:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:01.001 18:02:19 -- scripts/common.sh@367 -- # return 0 00:04:01.001 18:02:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.001 18:02:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:01.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.001 --rc genhtml_branch_coverage=1 00:04:01.001 --rc genhtml_function_coverage=1 00:04:01.001 --rc genhtml_legend=1 00:04:01.001 --rc geninfo_all_blocks=1 00:04:01.001 --rc geninfo_unexecuted_blocks=1 00:04:01.001 00:04:01.001 ' 00:04:01.001 18:02:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:01.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.001 --rc genhtml_branch_coverage=1 00:04:01.001 --rc genhtml_function_coverage=1 00:04:01.001 --rc genhtml_legend=1 00:04:01.001 --rc geninfo_all_blocks=1 00:04:01.001 --rc geninfo_unexecuted_blocks=1 00:04:01.001 00:04:01.001 ' 00:04:01.001 18:02:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:01.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.001 --rc genhtml_branch_coverage=1 00:04:01.001 --rc genhtml_function_coverage=1 00:04:01.001 --rc genhtml_legend=1 00:04:01.001 --rc geninfo_all_blocks=1 00:04:01.001 --rc geninfo_unexecuted_blocks=1 00:04:01.001 00:04:01.001 ' 00:04:01.001 18:02:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:01.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.001 --rc genhtml_branch_coverage=1 00:04:01.001 --rc genhtml_function_coverage=1 00:04:01.001 --rc genhtml_legend=1 00:04:01.001 --rc geninfo_all_blocks=1 00:04:01.001 --rc geninfo_unexecuted_blocks=1 00:04:01.001 00:04:01.001 ' 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:01.001 18:02:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:01.001 18:02:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.001 18:02:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.001 18:02:19 -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54564 00:04:01.001 18:02:19 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.001 18:02:19 -- 
spdkcli/tcp.sh@27 -- # waitforlisten 54564 00:04:01.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.001 18:02:19 -- common/autotest_common.sh@829 -- # '[' -z 54564 ']' 00:04:01.001 18:02:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.001 18:02:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.001 18:02:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.001 18:02:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.001 18:02:19 -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 [2024-11-18 18:02:19.419400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:01.001 [2024-11-18 18:02:19.419770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54564 ] 00:04:01.001 [2024-11-18 18:02:19.559180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.261 [2024-11-18 18:02:19.621955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:01.261 [2024-11-18 18:02:19.622503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.261 [2024-11-18 18:02:19.622510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.828 18:02:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.828 18:02:20 -- common/autotest_common.sh@862 -- # return 0 00:04:01.828 18:02:20 -- spdkcli/tcp.sh@31 -- # socat_pid=54581 00:04:01.828 18:02:20 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.828 18:02:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:02.087 [ 00:04:02.087 "bdev_malloc_delete", 00:04:02.087 "bdev_malloc_create", 00:04:02.087 "bdev_null_resize", 00:04:02.087 "bdev_null_delete", 00:04:02.087 "bdev_null_create", 00:04:02.087 "bdev_nvme_cuse_unregister", 00:04:02.087 "bdev_nvme_cuse_register", 00:04:02.087 "bdev_opal_new_user", 00:04:02.087 "bdev_opal_set_lock_state", 00:04:02.087 "bdev_opal_delete", 00:04:02.087 "bdev_opal_get_info", 00:04:02.087 "bdev_opal_create", 00:04:02.087 "bdev_nvme_opal_revert", 00:04:02.087 "bdev_nvme_opal_init", 00:04:02.087 "bdev_nvme_send_cmd", 00:04:02.087 "bdev_nvme_get_path_iostat", 00:04:02.087 "bdev_nvme_get_mdns_discovery_info", 00:04:02.087 "bdev_nvme_stop_mdns_discovery", 00:04:02.087 "bdev_nvme_start_mdns_discovery", 00:04:02.087 "bdev_nvme_set_multipath_policy", 00:04:02.087 "bdev_nvme_set_preferred_path", 00:04:02.087 "bdev_nvme_get_io_paths", 00:04:02.087 "bdev_nvme_remove_error_injection", 00:04:02.087 "bdev_nvme_add_error_injection", 00:04:02.087 "bdev_nvme_get_discovery_info", 00:04:02.087 "bdev_nvme_stop_discovery", 00:04:02.087 "bdev_nvme_start_discovery", 00:04:02.087 "bdev_nvme_get_controller_health_info", 00:04:02.087 "bdev_nvme_disable_controller", 00:04:02.087 "bdev_nvme_enable_controller", 00:04:02.087 "bdev_nvme_reset_controller", 00:04:02.087 "bdev_nvme_get_transport_statistics", 00:04:02.087 "bdev_nvme_apply_firmware", 00:04:02.087 "bdev_nvme_detach_controller", 00:04:02.087 "bdev_nvme_get_controllers", 00:04:02.087 "bdev_nvme_attach_controller", 00:04:02.087 "bdev_nvme_set_hotplug", 00:04:02.087 
"bdev_nvme_set_options", 00:04:02.087 "bdev_passthru_delete", 00:04:02.087 "bdev_passthru_create", 00:04:02.087 "bdev_lvol_grow_lvstore", 00:04:02.087 "bdev_lvol_get_lvols", 00:04:02.087 "bdev_lvol_get_lvstores", 00:04:02.087 "bdev_lvol_delete", 00:04:02.087 "bdev_lvol_set_read_only", 00:04:02.087 "bdev_lvol_resize", 00:04:02.087 "bdev_lvol_decouple_parent", 00:04:02.087 "bdev_lvol_inflate", 00:04:02.087 "bdev_lvol_rename", 00:04:02.087 "bdev_lvol_clone_bdev", 00:04:02.087 "bdev_lvol_clone", 00:04:02.087 "bdev_lvol_snapshot", 00:04:02.087 "bdev_lvol_create", 00:04:02.087 "bdev_lvol_delete_lvstore", 00:04:02.087 "bdev_lvol_rename_lvstore", 00:04:02.087 "bdev_lvol_create_lvstore", 00:04:02.087 "bdev_raid_set_options", 00:04:02.087 "bdev_raid_remove_base_bdev", 00:04:02.087 "bdev_raid_add_base_bdev", 00:04:02.087 "bdev_raid_delete", 00:04:02.087 "bdev_raid_create", 00:04:02.087 "bdev_raid_get_bdevs", 00:04:02.087 "bdev_error_inject_error", 00:04:02.087 "bdev_error_delete", 00:04:02.087 "bdev_error_create", 00:04:02.087 "bdev_split_delete", 00:04:02.087 "bdev_split_create", 00:04:02.087 "bdev_delay_delete", 00:04:02.087 "bdev_delay_create", 00:04:02.087 "bdev_delay_update_latency", 00:04:02.087 "bdev_zone_block_delete", 00:04:02.087 "bdev_zone_block_create", 00:04:02.087 "blobfs_create", 00:04:02.087 "blobfs_detect", 00:04:02.087 "blobfs_set_cache_size", 00:04:02.087 "bdev_aio_delete", 00:04:02.087 "bdev_aio_rescan", 00:04:02.087 "bdev_aio_create", 00:04:02.087 "bdev_ftl_set_property", 00:04:02.087 "bdev_ftl_get_properties", 00:04:02.087 "bdev_ftl_get_stats", 00:04:02.087 "bdev_ftl_unmap", 00:04:02.087 "bdev_ftl_unload", 00:04:02.087 "bdev_ftl_delete", 00:04:02.087 "bdev_ftl_load", 00:04:02.087 "bdev_ftl_create", 00:04:02.087 "bdev_virtio_attach_controller", 00:04:02.087 "bdev_virtio_scsi_get_devices", 00:04:02.087 "bdev_virtio_detach_controller", 00:04:02.087 "bdev_virtio_blk_set_hotplug", 00:04:02.087 "bdev_iscsi_delete", 00:04:02.087 "bdev_iscsi_create", 00:04:02.087 "bdev_iscsi_set_options", 00:04:02.087 "bdev_uring_delete", 00:04:02.087 "bdev_uring_create", 00:04:02.087 "accel_error_inject_error", 00:04:02.087 "ioat_scan_accel_module", 00:04:02.087 "dsa_scan_accel_module", 00:04:02.087 "iaa_scan_accel_module", 00:04:02.087 "vfu_virtio_create_scsi_endpoint", 00:04:02.087 "vfu_virtio_scsi_remove_target", 00:04:02.087 "vfu_virtio_scsi_add_target", 00:04:02.087 "vfu_virtio_create_blk_endpoint", 00:04:02.087 "vfu_virtio_delete_endpoint", 00:04:02.087 "iscsi_set_options", 00:04:02.087 "iscsi_get_auth_groups", 00:04:02.087 "iscsi_auth_group_remove_secret", 00:04:02.087 "iscsi_auth_group_add_secret", 00:04:02.087 "iscsi_delete_auth_group", 00:04:02.087 "iscsi_create_auth_group", 00:04:02.087 "iscsi_set_discovery_auth", 00:04:02.087 "iscsi_get_options", 00:04:02.087 "iscsi_target_node_request_logout", 00:04:02.087 "iscsi_target_node_set_redirect", 00:04:02.087 "iscsi_target_node_set_auth", 00:04:02.087 "iscsi_target_node_add_lun", 00:04:02.087 "iscsi_get_connections", 00:04:02.087 "iscsi_portal_group_set_auth", 00:04:02.087 "iscsi_start_portal_group", 00:04:02.087 "iscsi_delete_portal_group", 00:04:02.087 "iscsi_create_portal_group", 00:04:02.087 "iscsi_get_portal_groups", 00:04:02.087 "iscsi_delete_target_node", 00:04:02.087 "iscsi_target_node_remove_pg_ig_maps", 00:04:02.087 "iscsi_target_node_add_pg_ig_maps", 00:04:02.087 "iscsi_create_target_node", 00:04:02.087 "iscsi_get_target_nodes", 00:04:02.087 "iscsi_delete_initiator_group", 00:04:02.087 "iscsi_initiator_group_remove_initiators", 
00:04:02.087 "iscsi_initiator_group_add_initiators", 00:04:02.087 "iscsi_create_initiator_group", 00:04:02.087 "iscsi_get_initiator_groups", 00:04:02.087 "nvmf_set_crdt", 00:04:02.087 "nvmf_set_config", 00:04:02.088 "nvmf_set_max_subsystems", 00:04:02.088 "nvmf_subsystem_get_listeners", 00:04:02.088 "nvmf_subsystem_get_qpairs", 00:04:02.088 "nvmf_subsystem_get_controllers", 00:04:02.088 "nvmf_get_stats", 00:04:02.088 "nvmf_get_transports", 00:04:02.088 "nvmf_create_transport", 00:04:02.088 "nvmf_get_targets", 00:04:02.088 "nvmf_delete_target", 00:04:02.088 "nvmf_create_target", 00:04:02.088 "nvmf_subsystem_allow_any_host", 00:04:02.088 "nvmf_subsystem_remove_host", 00:04:02.088 "nvmf_subsystem_add_host", 00:04:02.088 "nvmf_subsystem_remove_ns", 00:04:02.088 "nvmf_subsystem_add_ns", 00:04:02.088 "nvmf_subsystem_listener_set_ana_state", 00:04:02.088 "nvmf_discovery_get_referrals", 00:04:02.088 "nvmf_discovery_remove_referral", 00:04:02.088 "nvmf_discovery_add_referral", 00:04:02.088 "nvmf_subsystem_remove_listener", 00:04:02.088 "nvmf_subsystem_add_listener", 00:04:02.088 "nvmf_delete_subsystem", 00:04:02.088 "nvmf_create_subsystem", 00:04:02.088 "nvmf_get_subsystems", 00:04:02.088 "env_dpdk_get_mem_stats", 00:04:02.088 "nbd_get_disks", 00:04:02.088 "nbd_stop_disk", 00:04:02.088 "nbd_start_disk", 00:04:02.088 "ublk_recover_disk", 00:04:02.088 "ublk_get_disks", 00:04:02.088 "ublk_stop_disk", 00:04:02.088 "ublk_start_disk", 00:04:02.088 "ublk_destroy_target", 00:04:02.088 "ublk_create_target", 00:04:02.088 "virtio_blk_create_transport", 00:04:02.088 "virtio_blk_get_transports", 00:04:02.088 "vhost_controller_set_coalescing", 00:04:02.088 "vhost_get_controllers", 00:04:02.088 "vhost_delete_controller", 00:04:02.088 "vhost_create_blk_controller", 00:04:02.088 "vhost_scsi_controller_remove_target", 00:04:02.088 "vhost_scsi_controller_add_target", 00:04:02.088 "vhost_start_scsi_controller", 00:04:02.088 "vhost_create_scsi_controller", 00:04:02.088 "thread_set_cpumask", 00:04:02.088 "framework_get_scheduler", 00:04:02.088 "framework_set_scheduler", 00:04:02.088 "framework_get_reactors", 00:04:02.088 "thread_get_io_channels", 00:04:02.088 "thread_get_pollers", 00:04:02.088 "thread_get_stats", 00:04:02.088 "framework_monitor_context_switch", 00:04:02.088 "spdk_kill_instance", 00:04:02.088 "log_enable_timestamps", 00:04:02.088 "log_get_flags", 00:04:02.088 "log_clear_flag", 00:04:02.088 "log_set_flag", 00:04:02.088 "log_get_level", 00:04:02.088 "log_set_level", 00:04:02.088 "log_get_print_level", 00:04:02.088 "log_set_print_level", 00:04:02.088 "framework_enable_cpumask_locks", 00:04:02.088 "framework_disable_cpumask_locks", 00:04:02.088 "framework_wait_init", 00:04:02.088 "framework_start_init", 00:04:02.088 "scsi_get_devices", 00:04:02.088 "bdev_get_histogram", 00:04:02.088 "bdev_enable_histogram", 00:04:02.088 "bdev_set_qos_limit", 00:04:02.088 "bdev_set_qd_sampling_period", 00:04:02.088 "bdev_get_bdevs", 00:04:02.088 "bdev_reset_iostat", 00:04:02.088 "bdev_get_iostat", 00:04:02.088 "bdev_examine", 00:04:02.088 "bdev_wait_for_examine", 00:04:02.088 "bdev_set_options", 00:04:02.088 "notify_get_notifications", 00:04:02.088 "notify_get_types", 00:04:02.088 "accel_get_stats", 00:04:02.088 "accel_set_options", 00:04:02.088 "accel_set_driver", 00:04:02.088 "accel_crypto_key_destroy", 00:04:02.088 "accel_crypto_keys_get", 00:04:02.088 "accel_crypto_key_create", 00:04:02.088 "accel_assign_opc", 00:04:02.088 "accel_get_module_info", 00:04:02.088 "accel_get_opc_assignments", 00:04:02.088 "vmd_rescan", 
00:04:02.088 "vmd_remove_device", 00:04:02.088 "vmd_enable", 00:04:02.088 "sock_set_default_impl", 00:04:02.088 "sock_impl_set_options", 00:04:02.088 "sock_impl_get_options", 00:04:02.088 "iobuf_get_stats", 00:04:02.088 "iobuf_set_options", 00:04:02.088 "framework_get_pci_devices", 00:04:02.088 "framework_get_config", 00:04:02.088 "framework_get_subsystems", 00:04:02.088 "vfu_tgt_set_base_path", 00:04:02.088 "trace_get_info", 00:04:02.088 "trace_get_tpoint_group_mask", 00:04:02.088 "trace_disable_tpoint_group", 00:04:02.088 "trace_enable_tpoint_group", 00:04:02.088 "trace_clear_tpoint_mask", 00:04:02.088 "trace_set_tpoint_mask", 00:04:02.088 "spdk_get_version", 00:04:02.088 "rpc_get_methods" 00:04:02.088 ] 00:04:02.347 18:02:20 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:02.347 18:02:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.347 18:02:20 -- common/autotest_common.sh@10 -- # set +x 00:04:02.347 18:02:20 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:02.347 18:02:20 -- spdkcli/tcp.sh@38 -- # killprocess 54564 00:04:02.347 18:02:20 -- common/autotest_common.sh@936 -- # '[' -z 54564 ']' 00:04:02.348 18:02:20 -- common/autotest_common.sh@940 -- # kill -0 54564 00:04:02.348 18:02:20 -- common/autotest_common.sh@941 -- # uname 00:04:02.348 18:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:02.348 18:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54564 00:04:02.348 killing process with pid 54564 00:04:02.348 18:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:02.348 18:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:02.348 18:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54564' 00:04:02.348 18:02:20 -- common/autotest_common.sh@955 -- # kill 54564 00:04:02.348 18:02:20 -- common/autotest_common.sh@960 -- # wait 54564 00:04:02.607 ************************************ 00:04:02.607 END TEST spdkcli_tcp 00:04:02.607 ************************************ 00:04:02.607 00:04:02.607 real 0m1.913s 00:04:02.607 user 0m3.669s 00:04:02.607 sys 0m0.362s 00:04:02.607 18:02:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.607 18:02:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.607 18:02:21 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.607 18:02:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.607 18:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.607 18:02:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.607 ************************************ 00:04:02.607 START TEST dpdk_mem_utility 00:04:02.607 ************************************ 00:04:02.607 18:02:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.607 * Looking for test storage... 
00:04:02.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:02.607 18:02:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.607 18:02:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.607 18:02:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.867 18:02:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.867 18:02:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.867 18:02:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.867 18:02:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.867 18:02:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.867 18:02:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.867 18:02:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.867 18:02:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.867 18:02:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.867 18:02:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.867 18:02:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.867 18:02:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.867 18:02:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.867 18:02:21 -- scripts/common.sh@344 -- # : 1 00:04:02.867 18:02:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.867 18:02:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.867 18:02:21 -- scripts/common.sh@364 -- # decimal 1 00:04:02.867 18:02:21 -- scripts/common.sh@352 -- # local d=1 00:04:02.867 18:02:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.867 18:02:21 -- scripts/common.sh@354 -- # echo 1 00:04:02.867 18:02:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.867 18:02:21 -- scripts/common.sh@365 -- # decimal 2 00:04:02.867 18:02:21 -- scripts/common.sh@352 -- # local d=2 00:04:02.867 18:02:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.867 18:02:21 -- scripts/common.sh@354 -- # echo 2 00:04:02.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:02.867 18:02:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.867 18:02:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.867 18:02:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.867 18:02:21 -- scripts/common.sh@367 -- # return 0 00:04:02.867 18:02:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.867 18:02:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.867 --rc genhtml_branch_coverage=1 00:04:02.867 --rc genhtml_function_coverage=1 00:04:02.867 --rc genhtml_legend=1 00:04:02.867 --rc geninfo_all_blocks=1 00:04:02.867 --rc geninfo_unexecuted_blocks=1 00:04:02.867 00:04:02.867 ' 00:04:02.867 18:02:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.867 --rc genhtml_branch_coverage=1 00:04:02.867 --rc genhtml_function_coverage=1 00:04:02.867 --rc genhtml_legend=1 00:04:02.867 --rc geninfo_all_blocks=1 00:04:02.867 --rc geninfo_unexecuted_blocks=1 00:04:02.867 00:04:02.867 ' 00:04:02.867 18:02:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.867 --rc genhtml_branch_coverage=1 00:04:02.867 --rc genhtml_function_coverage=1 00:04:02.867 --rc genhtml_legend=1 00:04:02.867 --rc geninfo_all_blocks=1 00:04:02.867 --rc geninfo_unexecuted_blocks=1 00:04:02.867 00:04:02.867 ' 00:04:02.867 18:02:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.867 --rc genhtml_branch_coverage=1 00:04:02.867 --rc genhtml_function_coverage=1 00:04:02.867 --rc genhtml_legend=1 00:04:02.867 --rc geninfo_all_blocks=1 00:04:02.867 --rc geninfo_unexecuted_blocks=1 00:04:02.867 00:04:02.867 ' 00:04:02.867 18:02:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:02.867 18:02:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54662 00:04:02.867 18:02:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54662 00:04:02.867 18:02:21 -- common/autotest_common.sh@829 -- # '[' -z 54662 ']' 00:04:02.867 18:02:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.867 18:02:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.867 18:02:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.867 18:02:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.867 18:02:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.867 18:02:21 -- common/autotest_common.sh@10 -- # set +x 00:04:02.867 [2024-11-18 18:02:21.351459] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:02.867 [2024-11-18 18:02:21.351828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54662 ] 00:04:03.127 [2024-11-18 18:02:21.491176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.127 [2024-11-18 18:02:21.541910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.127 [2024-11-18 18:02:21.542363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.695 18:02:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.695 18:02:22 -- common/autotest_common.sh@862 -- # return 0 00:04:03.695 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.695 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.695 18:02:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:03.695 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.695 { 00:04:03.695 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.695 } 00:04:03.695 18:02:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:03.695 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.955 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:03.955 1 heaps totaling size 814.000000 MiB 00:04:03.955 size: 814.000000 MiB heap id: 0 00:04:03.955 end heaps---------- 00:04:03.955 8 mempools totaling size 598.116089 MiB 00:04:03.955 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.955 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.955 size: 84.521057 MiB name: bdev_io_54662 00:04:03.955 size: 51.011292 MiB name: evtpool_54662 00:04:03.955 size: 50.003479 MiB name: msgpool_54662 00:04:03.955 size: 21.763794 MiB name: PDU_Pool 00:04:03.955 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.955 size: 0.026123 MiB name: Session_Pool 00:04:03.955 end mempools------- 00:04:03.955 6 memzones totaling size 4.142822 MiB 00:04:03.955 size: 1.000366 MiB name: RG_ring_0_54662 00:04:03.955 size: 1.000366 MiB name: RG_ring_1_54662 00:04:03.955 size: 1.000366 MiB name: RG_ring_4_54662 00:04:03.955 size: 1.000366 MiB name: RG_ring_5_54662 00:04:03.955 size: 0.125366 MiB name: RG_ring_2_54662 00:04:03.955 size: 0.015991 MiB name: RG_ring_3_54662 00:04:03.956 end memzones------- 00:04:03.956 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.956 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:04:03.956 list of free elements. 
size: 12.471008 MiB 00:04:03.956 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:03.956 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:03.956 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:03.956 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:03.956 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:03.956 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:03.956 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:03.956 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:03.956 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:03.956 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:04:03.956 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:03.956 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:03.956 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:03.956 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:03.956 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:03.956 list of standard malloc elements. size: 199.266418 MiB 00:04:03.956 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:03.956 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:03.956 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:03.956 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:03.956 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:03.956 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.956 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:03.956 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.956 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:03.956 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:03.956 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:03.956 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:03.956 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:03.956 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93580 with size: 0.000183 MiB 
00:04:03.957 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:03.957 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:03.957 element at 
address: 0x200027e6c780 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ec40 
with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:03.957 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:03.958 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:03.958 list of memzone associated elements. 
size: 602.262573 MiB 00:04:03.958 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:03.958 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.958 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:03.958 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.958 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:03.958 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54662_0 00:04:03.958 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:03.958 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54662_0 00:04:03.958 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:03.958 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54662_0 00:04:03.958 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:03.958 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.958 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:03.958 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.958 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:03.958 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54662 00:04:03.958 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:03.958 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54662 00:04:03.958 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.958 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54662 00:04:03.958 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:03.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.958 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:03.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.958 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:03.958 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.958 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:03.958 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.958 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:03.958 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54662 00:04:03.958 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:03.958 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54662 00:04:03.958 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:03.958 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54662 00:04:03.958 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:03.958 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54662 00:04:03.958 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:03.958 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54662 00:04:03.958 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:03.958 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.958 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:03.958 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.958 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:03.958 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.958 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:03.958 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_54662 00:04:03.958 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:03.958 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.958 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:03.958 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.958 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:03.958 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54662 00:04:03.958 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:03.958 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.958 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:03.958 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54662 00:04:03.958 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:03.958 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54662 00:04:03.958 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:03.958 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.958 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.958 18:02:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54662 00:04:03.958 18:02:22 -- common/autotest_common.sh@936 -- # '[' -z 54662 ']' 00:04:03.958 18:02:22 -- common/autotest_common.sh@940 -- # kill -0 54662 00:04:03.958 18:02:22 -- common/autotest_common.sh@941 -- # uname 00:04:03.958 18:02:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:03.958 18:02:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54662 00:04:03.958 killing process with pid 54662 00:04:03.958 18:02:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:03.958 18:02:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:03.958 18:02:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54662' 00:04:03.958 18:02:22 -- common/autotest_common.sh@955 -- # kill 54662 00:04:03.958 18:02:22 -- common/autotest_common.sh@960 -- # wait 54662 00:04:04.217 00:04:04.217 real 0m1.589s 00:04:04.217 user 0m1.774s 00:04:04.217 sys 0m0.321s 00:04:04.217 18:02:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.217 ************************************ 00:04:04.217 END TEST dpdk_mem_utility 00:04:04.217 ************************************ 00:04:04.217 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.217 18:02:22 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.217 18:02:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.217 18:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.217 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.217 ************************************ 00:04:04.217 START TEST event 00:04:04.217 ************************************ 00:04:04.217 18:02:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.477 * Looking for test storage... 
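Editor's note: the xtrace above shows the shared killprocess helper tearing down the dpdk_mem_utility target app (pid 54662): it checks that the pid is set and still alive, resolves the process name, refuses to signal a bare sudo wrapper, then kills and reaps the process. A minimal bash sketch of that flow, reconstructed only from the commands traced above (not the verbatim autotest_common.sh source):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                       # traced as: '[' -z 54662 ']'
    kill -0 "$pid" 2>/dev/null || return 1          # the target must still be running
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
    fi
    if [ "$process_name" = sudo ]; then
        return 1          # a sudo wrapper would need special handling; omitted in this sketch
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap it so the test sees the exit status
}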
00:04:04.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:04.477 18:02:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:04.477 18:02:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:04.477 18:02:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:04.477 18:02:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:04.477 18:02:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:04.477 18:02:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:04.477 18:02:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:04.477 18:02:22 -- scripts/common.sh@335 -- # IFS=.-: 00:04:04.477 18:02:22 -- scripts/common.sh@335 -- # read -ra ver1 00:04:04.477 18:02:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.477 18:02:22 -- scripts/common.sh@336 -- # read -ra ver2 00:04:04.477 18:02:22 -- scripts/common.sh@337 -- # local 'op=<' 00:04:04.477 18:02:22 -- scripts/common.sh@339 -- # ver1_l=2 00:04:04.477 18:02:22 -- scripts/common.sh@340 -- # ver2_l=1 00:04:04.477 18:02:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:04.477 18:02:22 -- scripts/common.sh@343 -- # case "$op" in 00:04:04.477 18:02:22 -- scripts/common.sh@344 -- # : 1 00:04:04.477 18:02:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:04.477 18:02:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.477 18:02:22 -- scripts/common.sh@364 -- # decimal 1 00:04:04.477 18:02:22 -- scripts/common.sh@352 -- # local d=1 00:04:04.477 18:02:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.477 18:02:22 -- scripts/common.sh@354 -- # echo 1 00:04:04.477 18:02:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:04.477 18:02:22 -- scripts/common.sh@365 -- # decimal 2 00:04:04.477 18:02:22 -- scripts/common.sh@352 -- # local d=2 00:04:04.477 18:02:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.477 18:02:22 -- scripts/common.sh@354 -- # echo 2 00:04:04.477 18:02:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:04.477 18:02:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:04.477 18:02:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:04.477 18:02:22 -- scripts/common.sh@367 -- # return 0 00:04:04.477 18:02:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.477 18:02:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:04.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.477 --rc genhtml_branch_coverage=1 00:04:04.477 --rc genhtml_function_coverage=1 00:04:04.477 --rc genhtml_legend=1 00:04:04.477 --rc geninfo_all_blocks=1 00:04:04.477 --rc geninfo_unexecuted_blocks=1 00:04:04.477 00:04:04.477 ' 00:04:04.477 18:02:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:04.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.477 --rc genhtml_branch_coverage=1 00:04:04.477 --rc genhtml_function_coverage=1 00:04:04.477 --rc genhtml_legend=1 00:04:04.477 --rc geninfo_all_blocks=1 00:04:04.477 --rc geninfo_unexecuted_blocks=1 00:04:04.477 00:04:04.477 ' 00:04:04.477 18:02:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:04.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.477 --rc genhtml_branch_coverage=1 00:04:04.477 --rc genhtml_function_coverage=1 00:04:04.477 --rc genhtml_legend=1 00:04:04.477 --rc geninfo_all_blocks=1 00:04:04.477 --rc geninfo_unexecuted_blocks=1 00:04:04.477 00:04:04.477 ' 00:04:04.477 18:02:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:04.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.477 --rc genhtml_branch_coverage=1 00:04:04.477 --rc genhtml_function_coverage=1 00:04:04.477 --rc genhtml_legend=1 00:04:04.477 --rc geninfo_all_blocks=1 00:04:04.477 --rc geninfo_unexecuted_blocks=1 00:04:04.477 00:04:04.477 ' 00:04:04.477 18:02:22 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:04.477 18:02:22 -- bdev/nbd_common.sh@6 -- # set -e 00:04:04.477 18:02:22 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.477 18:02:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:04.477 18:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.477 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 ************************************ 00:04:04.477 START TEST event_perf 00:04:04.477 ************************************ 00:04:04.477 18:02:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.477 Running I/O for 1 seconds...[2024-11-18 18:02:22.980447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:04.477 [2024-11-18 18:02:22.980743] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54746 ] 00:04:04.736 [2024-11-18 18:02:23.110865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.736 [2024-11-18 18:02:23.165223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.736 [2024-11-18 18:02:23.165365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.736 [2024-11-18 18:02:23.165430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.736 Running I/O for 1 seconds...[2024-11-18 18:02:23.165430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.674 00:04:05.674 lcore 0: 189694 00:04:05.674 lcore 1: 189694 00:04:05.674 lcore 2: 189694 00:04:05.674 lcore 3: 189693 00:04:05.674 done. 00:04:05.674 00:04:05.674 real 0m1.299s 00:04:05.674 user 0m4.125s 00:04:05.674 sys 0m0.051s 00:04:05.674 ************************************ 00:04:05.674 END TEST event_perf 00:04:05.674 ************************************ 00:04:05.674 18:02:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.674 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:04:05.934 18:02:24 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:05.934 18:02:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:05.934 18:02:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.934 18:02:24 -- common/autotest_common.sh@10 -- # set +x 00:04:05.934 ************************************ 00:04:05.934 START TEST event_reactor 00:04:05.934 ************************************ 00:04:05.934 18:02:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:05.934 [2024-11-18 18:02:24.329908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
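Editor's note: the event_perf and reactor runs traced in this block are standalone SPDK test binaries; run_test only wraps them with banners. A sketch of invoking them directly, with the paths and flags copied from the run_test lines above (-m is the reactor core mask, -t the runtime in seconds):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1   # 4 reactors for 1 s; prints per-lcore event counts
"$SPDK_DIR/test/event/reactor/reactor" -t 1                # single reactor; prints the oneshot/tick schedule trace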
00:04:05.934 [2024-11-18 18:02:24.330016] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54779 ] 00:04:05.934 [2024-11-18 18:02:24.463445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.934 [2024-11-18 18:02:24.524410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.311 test_start 00:04:07.311 oneshot 00:04:07.311 tick 100 00:04:07.311 tick 100 00:04:07.311 tick 250 00:04:07.311 tick 100 00:04:07.311 tick 100 00:04:07.311 tick 250 00:04:07.311 tick 500 00:04:07.311 tick 100 00:04:07.311 tick 100 00:04:07.311 tick 100 00:04:07.311 tick 250 00:04:07.311 tick 100 00:04:07.311 tick 100 00:04:07.311 test_end 00:04:07.311 ************************************ 00:04:07.311 END TEST event_reactor 00:04:07.311 ************************************ 00:04:07.311 00:04:07.311 real 0m1.302s 00:04:07.311 user 0m1.151s 00:04:07.311 sys 0m0.045s 00:04:07.311 18:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.311 18:02:25 -- common/autotest_common.sh@10 -- # set +x 00:04:07.311 18:02:25 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.311 18:02:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:07.311 18:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.311 18:02:25 -- common/autotest_common.sh@10 -- # set +x 00:04:07.311 ************************************ 00:04:07.311 START TEST event_reactor_perf 00:04:07.311 ************************************ 00:04:07.311 18:02:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.311 [2024-11-18 18:02:25.677648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
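Editor's note: every test in this log is launched through the run_test helper, which prints the asterisk banners and the START TEST / END TEST markers seen throughout. A hedged approximation of that wrapper, inferred only from the banners and argument checks visible in the trace (the real helper also handles timing and xtrace plumbing):

run_test() {
    local name=$1; shift                           # remaining arguments form the command to run
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"; local rc=$?                              # e.g. .../reactor_perf/reactor_perf -t 1
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}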
00:04:07.311 [2024-11-18 18:02:25.677965] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54809 ] 00:04:07.311 [2024-11-18 18:02:25.810403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.311 [2024-11-18 18:02:25.868771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.690 test_start 00:04:08.690 test_end 00:04:08.690 Performance: 413407 events per second 00:04:08.690 00:04:08.690 real 0m1.307s 00:04:08.690 user 0m1.165s 00:04:08.690 sys 0m0.036s 00:04:08.690 18:02:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:08.690 ************************************ 00:04:08.690 END TEST event_reactor_perf 00:04:08.690 18:02:26 -- common/autotest_common.sh@10 -- # set +x 00:04:08.690 ************************************ 00:04:08.690 18:02:27 -- event/event.sh@49 -- # uname -s 00:04:08.690 18:02:27 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:08.690 18:02:27 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:08.690 18:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.690 18:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.690 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:08.690 ************************************ 00:04:08.690 START TEST event_scheduler 00:04:08.690 ************************************ 00:04:08.690 18:02:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:08.690 * Looking for test storage... 00:04:08.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:08.690 18:02:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:08.690 18:02:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:08.690 18:02:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:08.690 18:02:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:08.690 18:02:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:08.690 18:02:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:08.690 18:02:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:08.690 18:02:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:08.690 18:02:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:08.690 18:02:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.691 18:02:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:08.691 18:02:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:08.691 18:02:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:08.691 18:02:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:08.691 18:02:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:08.691 18:02:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:08.691 18:02:27 -- scripts/common.sh@344 -- # : 1 00:04:08.691 18:02:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:08.691 18:02:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.691 18:02:27 -- scripts/common.sh@364 -- # decimal 1 00:04:08.691 18:02:27 -- scripts/common.sh@352 -- # local d=1 00:04:08.691 18:02:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.691 18:02:27 -- scripts/common.sh@354 -- # echo 1 00:04:08.691 18:02:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:08.691 18:02:27 -- scripts/common.sh@365 -- # decimal 2 00:04:08.691 18:02:27 -- scripts/common.sh@352 -- # local d=2 00:04:08.691 18:02:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.691 18:02:27 -- scripts/common.sh@354 -- # echo 2 00:04:08.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.691 18:02:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:08.691 18:02:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:08.691 18:02:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:08.691 18:02:27 -- scripts/common.sh@367 -- # return 0 00:04:08.691 18:02:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.691 18:02:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:08.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.691 --rc genhtml_branch_coverage=1 00:04:08.691 --rc genhtml_function_coverage=1 00:04:08.691 --rc genhtml_legend=1 00:04:08.691 --rc geninfo_all_blocks=1 00:04:08.691 --rc geninfo_unexecuted_blocks=1 00:04:08.691 00:04:08.691 ' 00:04:08.691 18:02:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:08.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.691 --rc genhtml_branch_coverage=1 00:04:08.691 --rc genhtml_function_coverage=1 00:04:08.691 --rc genhtml_legend=1 00:04:08.691 --rc geninfo_all_blocks=1 00:04:08.691 --rc geninfo_unexecuted_blocks=1 00:04:08.691 00:04:08.691 ' 00:04:08.691 18:02:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:08.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.691 --rc genhtml_branch_coverage=1 00:04:08.691 --rc genhtml_function_coverage=1 00:04:08.691 --rc genhtml_legend=1 00:04:08.691 --rc geninfo_all_blocks=1 00:04:08.691 --rc geninfo_unexecuted_blocks=1 00:04:08.691 00:04:08.691 ' 00:04:08.691 18:02:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:08.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.691 --rc genhtml_branch_coverage=1 00:04:08.691 --rc genhtml_function_coverage=1 00:04:08.691 --rc genhtml_legend=1 00:04:08.691 --rc geninfo_all_blocks=1 00:04:08.691 --rc geninfo_unexecuted_blocks=1 00:04:08.691 00:04:08.691 ' 00:04:08.691 18:02:27 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:08.691 18:02:27 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54883 00:04:08.691 18:02:27 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:08.691 18:02:27 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.691 18:02:27 -- scheduler/scheduler.sh@37 -- # waitforlisten 54883 00:04:08.691 18:02:27 -- common/autotest_common.sh@829 -- # '[' -z 54883 ']' 00:04:08.691 18:02:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.691 18:02:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.691 18:02:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
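Editor's note: the scheduler test starts its target with --wait-for-rpc and then blocks in waitforlisten until the app answers on its RPC socket. A simplified sketch of that startup sequence; the launch line, the /var/tmp/spdk.sock address, and the 100-retry limit are taken from the trace above, while the polling probe (rpc_get_methods) is an assumption about how the wait is implemented:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
trap 'kill $scheduler_pid; exit 1' SIGINT SIGTERM EXIT

rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                     # max_retries=100 as traced
    # any cheap RPC works as a liveness probe; rpc_get_methods is assumed here
    if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done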
00:04:08.691 18:02:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.691 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:08.691 [2024-11-18 18:02:27.248438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:08.691 [2024-11-18 18:02:27.248844] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54883 ] 00:04:08.950 [2024-11-18 18:02:27.390213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:08.950 [2024-11-18 18:02:27.460116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.950 [2024-11-18 18:02:27.463565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.950 [2024-11-18 18:02:27.463731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.950 [2024-11-18 18:02:27.463737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:08.950 18:02:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.950 18:02:27 -- common/autotest_common.sh@862 -- # return 0 00:04:08.950 18:02:27 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:08.950 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.950 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:08.950 POWER: Env isn't set yet! 00:04:08.950 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:08.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:08.950 POWER: Cannot set governor of lcore 0 to userspace 00:04:08.950 POWER: Attempting to initialise PSTAT power management... 00:04:08.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:08.950 POWER: Cannot set governor of lcore 0 to performance 00:04:08.950 POWER: Attempting to initialise AMD PSTATE power management... 00:04:08.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:08.950 POWER: Cannot set governor of lcore 0 to userspace 00:04:08.950 POWER: Attempting to initialise CPPC power management... 00:04:08.951 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:08.951 POWER: Cannot set governor of lcore 0 to userspace 00:04:08.951 POWER: Attempting to initialise VM power management... 
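Editor's note: from here the scheduler test drives the target entirely over its RPC socket: framework_set_scheduler dynamic (traced just above), framework_start_init once setup is done, and then the test plugin's scheduler_thread_create calls. A sketch of those calls expressed through scripts/rpc.py, assuming rpc_cmd is a thin wrapper around it and that the scheduler_plugin module is importable (method names and flags are the ones traced in this log):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC framework_set_scheduler dynamic          # select the dynamic scheduler while the app is still held by --wait-for-rpc
$RPC framework_start_init                     # complete subsystem init and release the app
# thread creation comes from the test's RPC plugin, not core rpc.py
# (assumes the plugin's directory is on PYTHONPATH):
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100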
00:04:08.951 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:08.951 POWER: Unable to set Power Management Environment for lcore 0 00:04:08.951 [2024-11-18 18:02:27.519804] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:08.951 [2024-11-18 18:02:27.519941] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:08.951 [2024-11-18 18:02:27.520064] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:08.951 [2024-11-18 18:02:27.520233] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:08.951 [2024-11-18 18:02:27.520351] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:08.951 [2024-11-18 18:02:27.520408] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:08.951 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.951 18:02:27 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:08.951 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.951 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 [2024-11-18 18:02:27.574935] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:09.210 18:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.210 18:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 ************************************ 00:04:09.210 START TEST scheduler_create_thread 00:04:09.210 ************************************ 00:04:09.210 18:02:27 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 2 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 3 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 4 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 5 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 6 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 7 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 8 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 9 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 10 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.210 18:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.210 18:02:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:09.210 18:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.210 18:02:27 -- common/autotest_common.sh@10 -- # set +x 00:04:09.778 18:02:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.778 18:02:28 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:09.778 18:02:28 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:09.778 18:02:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.778 18:02:28 -- common/autotest_common.sh@10 -- # set +x 00:04:11.156 ************************************ 00:04:11.156 END TEST scheduler_create_thread 00:04:11.156 ************************************ 00:04:11.156 18:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.157 00:04:11.157 real 0m1.755s 00:04:11.157 user 0m0.013s 00:04:11.157 sys 0m0.004s 00:04:11.157 18:02:29 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.157 18:02:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.157 18:02:29 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.157 18:02:29 -- scheduler/scheduler.sh@46 -- # killprocess 54883 00:04:11.157 18:02:29 -- common/autotest_common.sh@936 -- # '[' -z 54883 ']' 00:04:11.157 18:02:29 -- common/autotest_common.sh@940 -- # kill -0 54883 00:04:11.157 18:02:29 -- common/autotest_common.sh@941 -- # uname 00:04:11.157 18:02:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:11.157 18:02:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54883 00:04:11.157 18:02:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:11.157 killing process with pid 54883 00:04:11.157 18:02:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:11.157 18:02:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54883' 00:04:11.157 18:02:29 -- common/autotest_common.sh@955 -- # kill 54883 00:04:11.157 18:02:29 -- common/autotest_common.sh@960 -- # wait 54883 00:04:11.415 [2024-11-18 18:02:29.816874] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:11.675 00:04:11.675 real 0m2.996s 00:04:11.675 user 0m3.741s 00:04:11.675 sys 0m0.295s 00:04:11.675 18:02:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.675 ************************************ 00:04:11.675 18:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:11.675 END TEST event_scheduler 00:04:11.675 ************************************ 00:04:11.675 18:02:30 -- event/event.sh@51 -- # modprobe -n nbd 00:04:11.675 18:02:30 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:11.675 18:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.675 18:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.675 18:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:11.675 ************************************ 00:04:11.675 START TEST app_repeat 00:04:11.675 ************************************ 00:04:11.675 18:02:30 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:11.675 18:02:30 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.675 18:02:30 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.675 18:02:30 -- event/event.sh@13 -- # local nbd_list 00:04:11.675 18:02:30 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.675 18:02:30 -- event/event.sh@14 -- # local bdev_list 00:04:11.675 18:02:30 -- event/event.sh@15 -- # local repeat_times=4 00:04:11.675 18:02:30 -- event/event.sh@17 -- # modprobe nbd 00:04:11.675 Process app_repeat pid: 54964 00:04:11.675 spdk_app_start Round 0 00:04:11.675 18:02:30 -- event/event.sh@19 -- # repeat_pid=54964 00:04:11.675 18:02:30 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.675 18:02:30 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54964' 00:04:11.675 18:02:30 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:11.675 18:02:30 -- event/event.sh@23 -- # for i in {0..2} 00:04:11.675 18:02:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:11.675 18:02:30 -- event/event.sh@25 -- # waitforlisten 54964 /var/tmp/spdk-nbd.sock 00:04:11.675 18:02:30 -- common/autotest_common.sh@829 -- # '[' -z 54964 ']' 00:04:11.675 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:04:11.675 18:02:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.675 18:02:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.675 18:02:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.675 18:02:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.675 18:02:30 -- common/autotest_common.sh@10 -- # set +x 00:04:11.675 [2024-11-18 18:02:30.097449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:11.675 [2024-11-18 18:02:30.097839] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54964 ] 00:04:11.675 [2024-11-18 18:02:30.231480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.971 [2024-11-18 18:02:30.285621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.971 [2024-11-18 18:02:30.285625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.971 18:02:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.971 18:02:30 -- common/autotest_common.sh@862 -- # return 0 00:04:11.971 18:02:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.229 Malloc0 00:04:12.229 18:02:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.488 Malloc1 00:04:12.488 18:02:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@12 -- # local i 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.488 18:02:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.489 18:02:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.747 /dev/nbd0 00:04:12.748 18:02:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.748 18:02:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.748 18:02:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:12.748 18:02:31 -- common/autotest_common.sh@867 -- # local i 00:04:12.748 18:02:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:12.748 18:02:31 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:12.748 18:02:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:12.748 18:02:31 -- common/autotest_common.sh@871 -- # break 00:04:12.748 18:02:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:12.748 18:02:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:12.748 18:02:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.748 1+0 records in 00:04:12.748 1+0 records out 00:04:12.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269201 s, 15.2 MB/s 00:04:12.748 18:02:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:12.748 18:02:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:12.748 18:02:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:12.748 18:02:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:12.748 18:02:31 -- common/autotest_common.sh@887 -- # return 0 00:04:12.748 18:02:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.748 18:02:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.748 18:02:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.007 /dev/nbd1 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.007 18:02:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:13.007 18:02:31 -- common/autotest_common.sh@867 -- # local i 00:04:13.007 18:02:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:13.007 18:02:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:13.007 18:02:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:13.007 18:02:31 -- common/autotest_common.sh@871 -- # break 00:04:13.007 18:02:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:13.007 18:02:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:13.007 18:02:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.007 1+0 records in 00:04:13.007 1+0 records out 00:04:13.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651355 s, 6.3 MB/s 00:04:13.007 18:02:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:13.007 18:02:31 -- common/autotest_common.sh@884 -- # size=4096 00:04:13.007 18:02:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:13.007 18:02:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:13.007 18:02:31 -- common/autotest_common.sh@887 -- # return 0 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.007 18:02:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.266 { 00:04:13.266 "nbd_device": "/dev/nbd0", 00:04:13.266 "bdev_name": "Malloc0" 00:04:13.266 }, 00:04:13.266 { 00:04:13.266 "nbd_device": "/dev/nbd1", 
00:04:13.266 "bdev_name": "Malloc1" 00:04:13.266 } 00:04:13.266 ]' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.266 { 00:04:13.266 "nbd_device": "/dev/nbd0", 00:04:13.266 "bdev_name": "Malloc0" 00:04:13.266 }, 00:04:13.266 { 00:04:13.266 "nbd_device": "/dev/nbd1", 00:04:13.266 "bdev_name": "Malloc1" 00:04:13.266 } 00:04:13.266 ]' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.266 /dev/nbd1' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.266 /dev/nbd1' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.266 256+0 records in 00:04:13.266 256+0 records out 00:04:13.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101055 s, 104 MB/s 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.266 256+0 records in 00:04:13.266 256+0 records out 00:04:13.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025869 s, 40.5 MB/s 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.266 256+0 records in 00:04:13.266 256+0 records out 00:04:13.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027448 s, 38.2 MB/s 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.266 18:02:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@51 -- # local i 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.525 18:02:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.784 18:02:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.785 18:02:32 -- bdev/nbd_common.sh@41 -- # break 00:04:13.785 18:02:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.785 18:02:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.785 18:02:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@41 -- # break 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.043 18:02:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@65 -- # true 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.302 18:02:32 -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.302 18:02:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.561 18:02:33 -- event/event.sh@35 -- # sleep 3 00:04:14.821 [2024-11-18 18:02:33.202481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.821 [2024-11-18 18:02:33.248270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.821 [2024-11-18 
18:02:33.248280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.821 [2024-11-18 18:02:33.275934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.821 [2024-11-18 18:02:33.275989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.100 spdk_app_start Round 1 00:04:18.100 18:02:36 -- event/event.sh@23 -- # for i in {0..2} 00:04:18.100 18:02:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:18.100 18:02:36 -- event/event.sh@25 -- # waitforlisten 54964 /var/tmp/spdk-nbd.sock 00:04:18.100 18:02:36 -- common/autotest_common.sh@829 -- # '[' -z 54964 ']' 00:04:18.100 18:02:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.100 18:02:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.100 18:02:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.100 18:02:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.100 18:02:36 -- common/autotest_common.sh@10 -- # set +x 00:04:18.100 18:02:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.100 18:02:36 -- common/autotest_common.sh@862 -- # return 0 00:04:18.100 18:02:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.100 Malloc0 00:04:18.100 18:02:36 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.358 Malloc1 00:04:18.358 18:02:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@12 -- # local i 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.358 18:02:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.616 /dev/nbd0 00:04:18.616 18:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.616 18:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.616 18:02:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:18.616 18:02:37 -- common/autotest_common.sh@867 -- # local i 00:04:18.616 18:02:37 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:04:18.616 18:02:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:18.616 18:02:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:18.616 18:02:37 -- common/autotest_common.sh@871 -- # break 00:04:18.616 18:02:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:18.616 18:02:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:18.616 18:02:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.616 1+0 records in 00:04:18.616 1+0 records out 00:04:18.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378211 s, 10.8 MB/s 00:04:18.616 18:02:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:18.616 18:02:37 -- common/autotest_common.sh@884 -- # size=4096 00:04:18.616 18:02:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:18.616 18:02:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:18.875 18:02:37 -- common/autotest_common.sh@887 -- # return 0 00:04:18.875 18:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.875 18:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.875 18:02:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.875 /dev/nbd1 00:04:18.875 18:02:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.875 18:02:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.875 18:02:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:18.875 18:02:37 -- common/autotest_common.sh@867 -- # local i 00:04:18.875 18:02:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:18.875 18:02:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:18.875 18:02:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:18.875 18:02:37 -- common/autotest_common.sh@871 -- # break 00:04:18.875 18:02:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:18.875 18:02:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:18.875 18:02:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.875 1+0 records in 00:04:18.875 1+0 records out 00:04:18.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627953 s, 6.5 MB/s 00:04:18.875 18:02:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:18.875 18:02:37 -- common/autotest_common.sh@884 -- # size=4096 00:04:18.875 18:02:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:19.138 18:02:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:19.138 18:02:37 -- common/autotest_common.sh@887 -- # return 0 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.138 { 00:04:19.138 "nbd_device": "/dev/nbd0", 00:04:19.138 "bdev_name": "Malloc0" 00:04:19.138 }, 00:04:19.138 { 00:04:19.138 
"nbd_device": "/dev/nbd1", 00:04:19.138 "bdev_name": "Malloc1" 00:04:19.138 } 00:04:19.138 ]' 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.138 { 00:04:19.138 "nbd_device": "/dev/nbd0", 00:04:19.138 "bdev_name": "Malloc0" 00:04:19.138 }, 00:04:19.138 { 00:04:19.138 "nbd_device": "/dev/nbd1", 00:04:19.138 "bdev_name": "Malloc1" 00:04:19.138 } 00:04:19.138 ]' 00:04:19.138 18:02:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.420 /dev/nbd1' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.420 /dev/nbd1' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.420 256+0 records in 00:04:19.420 256+0 records out 00:04:19.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00945694 s, 111 MB/s 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.420 256+0 records in 00:04:19.420 256+0 records out 00:04:19.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248282 s, 42.2 MB/s 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.420 256+0 records in 00:04:19.420 256+0 records out 00:04:19.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261884 s, 40.0 MB/s 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:19.420 18:02:37 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@51 -- # local i 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.420 18:02:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.679 18:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.680 18:02:38 -- bdev/nbd_common.sh@41 -- # break 00:04:19.680 18:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.680 18:02:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.680 18:02:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@41 -- # break 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.939 18:02:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.197 18:02:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@65 -- # true 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.198 18:02:38 -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.198 18:02:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.765 18:02:39 -- event/event.sh@35 -- # sleep 3 00:04:20.765 [2024-11-18 18:02:39.231953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.765 [2024-11-18 18:02:39.290714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
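The round traced above always has the same shape. Condensed into a plain shell sketch (every RPC name, device path, block size and count below is taken from the trace itself; the temporary file path is shortened and the 20-retry waitfornbd loop is collapsed into a single check), one app_repeat round amounts to:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # -> Malloc0
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    grep -q -w nbd0 /proc/partitions           # waitfornbd retries this up to 20 times,
                                               # then reads one block with iflag=direct
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"        # the data must read back unchanged
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                         # expect an empty list: []
    $rpc spdk_kill_instance SIGTERM            # end of the round; app_repeat re-enters spdk_app_start

The direct-I/O flags are deliberate: oflag=direct and iflag=direct bypass the page cache, so the cmp step verifies data that actually travelled through the NBD device into the Malloc bdev rather than cached pages.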
00:04:20.765 [2024-11-18 18:02:39.290723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.765 [2024-11-18 18:02:39.319475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.765 [2024-11-18 18:02:39.319566] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.048 spdk_app_start Round 2 00:04:24.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.048 18:02:42 -- event/event.sh@23 -- # for i in {0..2} 00:04:24.048 18:02:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:24.048 18:02:42 -- event/event.sh@25 -- # waitforlisten 54964 /var/tmp/spdk-nbd.sock 00:04:24.048 18:02:42 -- common/autotest_common.sh@829 -- # '[' -z 54964 ']' 00:04:24.048 18:02:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.048 18:02:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.048 18:02:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.048 18:02:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.048 18:02:42 -- common/autotest_common.sh@10 -- # set +x 00:04:24.048 18:02:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.049 18:02:42 -- common/autotest_common.sh@862 -- # return 0 00:04:24.049 18:02:42 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.049 Malloc0 00:04:24.049 18:02:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.307 Malloc1 00:04:24.307 18:02:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@12 -- # local i 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.307 18:02:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:24.565 /dev/nbd0 00:04:24.565 18:02:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.565 18:02:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.565 18:02:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:24.565 18:02:43 -- common/autotest_common.sh@867 -- # local i 00:04:24.565 18:02:43 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.565 18:02:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.565 18:02:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:24.565 18:02:43 -- common/autotest_common.sh@871 -- # break 00:04:24.565 18:02:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.565 18:02:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.565 18:02:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.565 1+0 records in 00:04:24.565 1+0 records out 00:04:24.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474574 s, 8.6 MB/s 00:04:24.565 18:02:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.565 18:02:43 -- common/autotest_common.sh@884 -- # size=4096 00:04:24.565 18:02:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.565 18:02:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.565 18:02:43 -- common/autotest_common.sh@887 -- # return 0 00:04:24.565 18:02:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.565 18:02:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.565 18:02:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.824 /dev/nbd1 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.824 18:02:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:24.824 18:02:43 -- common/autotest_common.sh@867 -- # local i 00:04:24.824 18:02:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:24.824 18:02:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:24.824 18:02:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:24.824 18:02:43 -- common/autotest_common.sh@871 -- # break 00:04:24.824 18:02:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:24.824 18:02:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:24.824 18:02:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.824 1+0 records in 00:04:24.824 1+0 records out 00:04:24.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023341 s, 17.5 MB/s 00:04:24.824 18:02:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.824 18:02:43 -- common/autotest_common.sh@884 -- # size=4096 00:04:24.824 18:02:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:24.824 18:02:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:24.824 18:02:43 -- common/autotest_common.sh@887 -- # return 0 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.824 18:02:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.825 18:02:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.083 18:02:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.083 { 00:04:25.083 "nbd_device": "/dev/nbd0", 00:04:25.083 "bdev_name": "Malloc0" 
00:04:25.083 }, 00:04:25.083 { 00:04:25.083 "nbd_device": "/dev/nbd1", 00:04:25.083 "bdev_name": "Malloc1" 00:04:25.083 } 00:04:25.083 ]' 00:04:25.083 18:02:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.083 18:02:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.083 { 00:04:25.083 "nbd_device": "/dev/nbd0", 00:04:25.083 "bdev_name": "Malloc0" 00:04:25.083 }, 00:04:25.083 { 00:04:25.083 "nbd_device": "/dev/nbd1", 00:04:25.083 "bdev_name": "Malloc1" 00:04:25.083 } 00:04:25.083 ]' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.341 /dev/nbd1' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.341 /dev/nbd1' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.341 256+0 records in 00:04:25.341 256+0 records out 00:04:25.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107368 s, 97.7 MB/s 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.341 256+0 records in 00:04:25.341 256+0 records out 00:04:25.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235379 s, 44.5 MB/s 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.341 256+0 records in 00:04:25.341 256+0 records out 00:04:25.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254208 s, 41.2 MB/s 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@51 -- # local i 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.341 18:02:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@41 -- # break 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.600 18:02:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@41 -- # break 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.858 18:02:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.117 18:02:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.117 18:02:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.117 18:02:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@65 -- # true 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.375 18:02:44 -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.375 18:02:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.634 18:02:45 -- event/event.sh@35 -- # sleep 3 00:04:26.634 [2024-11-18 18:02:45.149369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.634 [2024-11-18 18:02:45.197261] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:04:26.634 [2024-11-18 18:02:45.197270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.634 [2024-11-18 18:02:45.226408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.634 [2024-11-18 18:02:45.226463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:29.918 18:02:48 -- event/event.sh@38 -- # waitforlisten 54964 /var/tmp/spdk-nbd.sock 00:04:29.918 18:02:48 -- common/autotest_common.sh@829 -- # '[' -z 54964 ']' 00:04:29.918 18:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:29.918 18:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.918 18:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:29.918 18:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.918 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:29.918 18:02:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.919 18:02:48 -- common/autotest_common.sh@862 -- # return 0 00:04:29.919 18:02:48 -- event/event.sh@39 -- # killprocess 54964 00:04:29.919 18:02:48 -- common/autotest_common.sh@936 -- # '[' -z 54964 ']' 00:04:29.919 18:02:48 -- common/autotest_common.sh@940 -- # kill -0 54964 00:04:29.919 18:02:48 -- common/autotest_common.sh@941 -- # uname 00:04:29.919 18:02:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.919 18:02:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54964 00:04:29.919 killing process with pid 54964 00:04:29.919 18:02:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.919 18:02:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.919 18:02:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54964' 00:04:29.919 18:02:48 -- common/autotest_common.sh@955 -- # kill 54964 00:04:29.919 18:02:48 -- common/autotest_common.sh@960 -- # wait 54964 00:04:29.919 spdk_app_start is called in Round 0. 00:04:29.919 Shutdown signal received, stop current app iteration 00:04:29.919 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:29.919 spdk_app_start is called in Round 1. 00:04:29.919 Shutdown signal received, stop current app iteration 00:04:29.919 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:29.919 spdk_app_start is called in Round 2. 00:04:29.919 Shutdown signal received, stop current app iteration 00:04:29.919 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:04:29.919 spdk_app_start is called in Round 3. 
00:04:29.919 Shutdown signal received, stop current app iteration 00:04:29.919 18:02:48 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:29.919 18:02:48 -- event/event.sh@42 -- # return 0 00:04:29.919 00:04:29.919 real 0m18.398s 00:04:29.919 user 0m41.972s 00:04:29.919 sys 0m2.473s 00:04:29.919 18:02:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.919 ************************************ 00:04:29.919 END TEST app_repeat 00:04:29.919 ************************************ 00:04:29.919 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:29.919 18:02:48 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:29.919 18:02:48 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:29.919 18:02:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.919 18:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.919 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.178 ************************************ 00:04:30.178 START TEST cpu_locks 00:04:30.178 ************************************ 00:04:30.178 18:02:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:30.178 * Looking for test storage... 00:04:30.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:30.178 18:02:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.178 18:02:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.178 18:02:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.178 18:02:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.178 18:02:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.178 18:02:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.178 18:02:48 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.178 18:02:48 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.178 18:02:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.178 18:02:48 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.178 18:02:48 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.178 18:02:48 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.178 18:02:48 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.178 18:02:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.178 18:02:48 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.178 18:02:48 -- scripts/common.sh@344 -- # : 1 00:04:30.178 18:02:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.178 18:02:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.178 18:02:48 -- scripts/common.sh@364 -- # decimal 1 00:04:30.178 18:02:48 -- scripts/common.sh@352 -- # local d=1 00:04:30.178 18:02:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.178 18:02:48 -- scripts/common.sh@354 -- # echo 1 00:04:30.178 18:02:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.178 18:02:48 -- scripts/common.sh@365 -- # decimal 2 00:04:30.178 18:02:48 -- scripts/common.sh@352 -- # local d=2 00:04:30.178 18:02:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.178 18:02:48 -- scripts/common.sh@354 -- # echo 2 00:04:30.178 18:02:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.178 18:02:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.178 18:02:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.178 18:02:48 -- scripts/common.sh@367 -- # return 0 00:04:30.178 18:02:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.178 --rc genhtml_branch_coverage=1 00:04:30.178 --rc genhtml_function_coverage=1 00:04:30.178 --rc genhtml_legend=1 00:04:30.178 --rc geninfo_all_blocks=1 00:04:30.178 --rc geninfo_unexecuted_blocks=1 00:04:30.178 00:04:30.178 ' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.178 --rc genhtml_branch_coverage=1 00:04:30.178 --rc genhtml_function_coverage=1 00:04:30.178 --rc genhtml_legend=1 00:04:30.178 --rc geninfo_all_blocks=1 00:04:30.178 --rc geninfo_unexecuted_blocks=1 00:04:30.178 00:04:30.178 ' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.178 --rc genhtml_branch_coverage=1 00:04:30.178 --rc genhtml_function_coverage=1 00:04:30.178 --rc genhtml_legend=1 00:04:30.178 --rc geninfo_all_blocks=1 00:04:30.178 --rc geninfo_unexecuted_blocks=1 00:04:30.178 00:04:30.178 ' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.178 --rc genhtml_branch_coverage=1 00:04:30.178 --rc genhtml_function_coverage=1 00:04:30.178 --rc genhtml_legend=1 00:04:30.178 --rc geninfo_all_blocks=1 00:04:30.178 --rc geninfo_unexecuted_blocks=1 00:04:30.178 00:04:30.178 ' 00:04:30.178 18:02:48 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:30.178 18:02:48 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:30.178 18:02:48 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:30.178 18:02:48 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:30.178 18:02:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.178 18:02:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.178 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.178 ************************************ 00:04:30.178 START TEST default_locks 00:04:30.178 ************************************ 00:04:30.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
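The default_locks test that begins here is the baseline case for the SPDK CPU-core locks: a single spdk_tgt pinned to core 0 (-m 0x1) must hold a core lock for as long as it lives. A hedged sketch of the assertion, using only the binary path, core mask and lock name visible in the trace (lslocks is the util-linux tool; waitforlisten is the suite's own readiness helper):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    # the suite first waits for /var/tmp/spdk.sock with waitforlisten, then:
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # the core-0 lock must be present
    kill "$pid" && wait "$pid"
    # afterwards the test double-checks the target is really gone (the
    # "NOT waitforlisten" / "No such process" lines below) and that no
    # lock files are left behind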
00:04:30.178 18:02:48 -- common/autotest_common.sh@1114 -- # default_locks 00:04:30.178 18:02:48 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55396 00:04:30.178 18:02:48 -- event/cpu_locks.sh@47 -- # waitforlisten 55396 00:04:30.178 18:02:48 -- common/autotest_common.sh@829 -- # '[' -z 55396 ']' 00:04:30.178 18:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.178 18:02:48 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.178 18:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.178 18:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.178 18:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.178 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.178 [2024-11-18 18:02:48.762925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:30.178 [2024-11-18 18:02:48.762996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55396 ] 00:04:30.437 [2024-11-18 18:02:48.895589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.437 [2024-11-18 18:02:48.948062] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.437 [2024-11-18 18:02:48.948453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.374 18:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.374 18:02:49 -- common/autotest_common.sh@862 -- # return 0 00:04:31.374 18:02:49 -- event/cpu_locks.sh@49 -- # locks_exist 55396 00:04:31.374 18:02:49 -- event/cpu_locks.sh@22 -- # lslocks -p 55396 00:04:31.374 18:02:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.633 18:02:50 -- event/cpu_locks.sh@50 -- # killprocess 55396 00:04:31.633 18:02:50 -- common/autotest_common.sh@936 -- # '[' -z 55396 ']' 00:04:31.633 18:02:50 -- common/autotest_common.sh@940 -- # kill -0 55396 00:04:31.633 18:02:50 -- common/autotest_common.sh@941 -- # uname 00:04:31.633 18:02:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:31.633 18:02:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55396 00:04:31.633 18:02:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:31.633 18:02:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:31.633 killing process with pid 55396 00:04:31.633 18:02:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55396' 00:04:31.633 18:02:50 -- common/autotest_common.sh@955 -- # kill 55396 00:04:31.633 18:02:50 -- common/autotest_common.sh@960 -- # wait 55396 00:04:31.893 18:02:50 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55396 00:04:31.893 18:02:50 -- common/autotest_common.sh@650 -- # local es=0 00:04:31.893 18:02:50 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55396 00:04:31.893 18:02:50 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:31.893 18:02:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.893 18:02:50 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:31.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:31.893 ERROR: process (pid: 55396) is no longer running 00:04:31.893 18:02:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.893 18:02:50 -- common/autotest_common.sh@653 -- # waitforlisten 55396 00:04:31.893 18:02:50 -- common/autotest_common.sh@829 -- # '[' -z 55396 ']' 00:04:31.893 18:02:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.893 18:02:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.893 18:02:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.893 18:02:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.893 18:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55396) - No such process 00:04:31.893 18:02:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.893 18:02:50 -- common/autotest_common.sh@862 -- # return 1 00:04:31.893 18:02:50 -- common/autotest_common.sh@653 -- # es=1 00:04:31.893 18:02:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.893 18:02:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.893 18:02:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.893 18:02:50 -- event/cpu_locks.sh@54 -- # no_locks 00:04:31.893 18:02:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.893 18:02:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.893 18:02:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.893 00:04:31.893 real 0m1.762s 00:04:31.893 user 0m1.997s 00:04:31.893 sys 0m0.441s 00:04:31.893 18:02:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.893 18:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.893 ************************************ 00:04:31.893 END TEST default_locks 00:04:31.893 ************************************ 00:04:32.152 18:02:50 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:32.152 18:02:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:32.152 18:02:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:32.152 18:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:32.152 ************************************ 00:04:32.152 START TEST default_locks_via_rpc 00:04:32.152 ************************************ 00:04:32.152 18:02:50 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:04:32.152 18:02:50 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55443 00:04:32.152 18:02:50 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.152 18:02:50 -- event/cpu_locks.sh@63 -- # waitforlisten 55443 00:04:32.152 18:02:50 -- common/autotest_common.sh@829 -- # '[' -z 55443 ']' 00:04:32.152 18:02:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.152 18:02:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.152 18:02:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.152 18:02:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.152 18:02:50 -- common/autotest_common.sh@10 -- # set +x 00:04:32.152 [2024-11-18 18:02:50.586928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:32.152 [2024-11-18 18:02:50.587210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55443 ] 00:04:32.152 [2024-11-18 18:02:50.726505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.411 [2024-11-18 18:02:50.783300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.411 [2024-11-18 18:02:50.783781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.347 18:02:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.347 18:02:51 -- common/autotest_common.sh@862 -- # return 0 00:04:33.347 18:02:51 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:33.347 18:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.347 18:02:51 -- common/autotest_common.sh@10 -- # set +x 00:04:33.347 18:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.347 18:02:51 -- event/cpu_locks.sh@67 -- # no_locks 00:04:33.347 18:02:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.347 18:02:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.347 18:02:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.347 18:02:51 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.347 18:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.347 18:02:51 -- common/autotest_common.sh@10 -- # set +x 00:04:33.347 18:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.347 18:02:51 -- event/cpu_locks.sh@71 -- # locks_exist 55443 00:04:33.347 18:02:51 -- event/cpu_locks.sh@22 -- # lslocks -p 55443 00:04:33.347 18:02:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.347 18:02:51 -- event/cpu_locks.sh@73 -- # killprocess 55443 00:04:33.347 18:02:51 -- common/autotest_common.sh@936 -- # '[' -z 55443 ']' 00:04:33.347 18:02:51 -- common/autotest_common.sh@940 -- # kill -0 55443 00:04:33.347 18:02:51 -- common/autotest_common.sh@941 -- # uname 00:04:33.347 18:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.347 18:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55443 00:04:33.606 killing process with pid 55443 00:04:33.606 18:02:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.606 18:02:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.606 18:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55443' 00:04:33.606 18:02:51 -- common/autotest_common.sh@955 -- # kill 55443 00:04:33.606 18:02:51 -- common/autotest_common.sh@960 -- # wait 55443 00:04:33.866 ************************************ 00:04:33.866 END TEST default_locks_via_rpc 00:04:33.866 ************************************ 00:04:33.866 00:04:33.866 real 0m1.697s 00:04:33.866 user 0m2.004s 00:04:33.866 sys 0m0.377s 00:04:33.866 18:02:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.866 18:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:33.866 18:02:52 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:33.866 18:02:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.866 18:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.866 18:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:33.866 
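default_locks_via_rpc, which finishes just above, covers the same lock but toggles it at runtime instead of at startup: the target boots normally on core 0, releases its core lock over RPC, then re-acquires it. A condensed sketch (the two framework_* RPC names and the lslocks check are copied from the trace; $pid stands for the running spdk_tgt, 55443 in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks         # release the per-core lock while running
    # at this point the suite asserts that no lock files are held
    $rpc framework_enable_cpumask_locks          # take the lock back
    lslocks -p "$pid" | grep -q spdk_cpu_lock    # and it is visible again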
************************************ 00:04:33.866 START TEST non_locking_app_on_locked_coremask 00:04:33.866 ************************************ 00:04:33.866 18:02:52 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:04:33.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.866 18:02:52 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55494 00:04:33.866 18:02:52 -- event/cpu_locks.sh@81 -- # waitforlisten 55494 /var/tmp/spdk.sock 00:04:33.866 18:02:52 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.866 18:02:52 -- common/autotest_common.sh@829 -- # '[' -z 55494 ']' 00:04:33.866 18:02:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.866 18:02:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.866 18:02:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.866 18:02:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.866 18:02:52 -- common/autotest_common.sh@10 -- # set +x 00:04:33.866 [2024-11-18 18:02:52.335727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:33.866 [2024-11-18 18:02:52.336468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55494 ] 00:04:34.124 [2024-11-18 18:02:52.475016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.124 [2024-11-18 18:02:52.527717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:34.124 [2024-11-18 18:02:52.528155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.060 18:02:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.060 18:02:53 -- common/autotest_common.sh@862 -- # return 0 00:04:35.060 18:02:53 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:35.060 18:02:53 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55510 00:04:35.060 18:02:53 -- event/cpu_locks.sh@85 -- # waitforlisten 55510 /var/tmp/spdk2.sock 00:04:35.060 18:02:53 -- common/autotest_common.sh@829 -- # '[' -z 55510 ']' 00:04:35.060 18:02:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.060 18:02:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.060 18:02:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.060 18:02:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.060 18:02:53 -- common/autotest_common.sh@10 -- # set +x 00:04:35.060 [2024-11-18 18:02:53.379891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:35.060 [2024-11-18 18:02:53.379986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55510 ] 00:04:35.061 [2024-11-18 18:02:53.516599] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:35.061 [2024-11-18 18:02:53.516651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.061 [2024-11-18 18:02:53.617133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.061 [2024-11-18 18:02:53.617303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.998 18:02:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.998 18:02:54 -- common/autotest_common.sh@862 -- # return 0 00:04:35.998 18:02:54 -- event/cpu_locks.sh@87 -- # locks_exist 55494 00:04:35.998 18:02:54 -- event/cpu_locks.sh@22 -- # lslocks -p 55494 00:04:35.998 18:02:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.566 18:02:55 -- event/cpu_locks.sh@89 -- # killprocess 55494 00:04:36.566 18:02:55 -- common/autotest_common.sh@936 -- # '[' -z 55494 ']' 00:04:36.566 18:02:55 -- common/autotest_common.sh@940 -- # kill -0 55494 00:04:36.566 18:02:55 -- common/autotest_common.sh@941 -- # uname 00:04:36.566 18:02:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.566 18:02:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55494 00:04:36.825 killing process with pid 55494 00:04:36.825 18:02:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.825 18:02:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.825 18:02:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55494' 00:04:36.825 18:02:55 -- common/autotest_common.sh@955 -- # kill 55494 00:04:36.825 18:02:55 -- common/autotest_common.sh@960 -- # wait 55494 00:04:37.085 18:02:55 -- event/cpu_locks.sh@90 -- # killprocess 55510 00:04:37.085 18:02:55 -- common/autotest_common.sh@936 -- # '[' -z 55510 ']' 00:04:37.085 18:02:55 -- common/autotest_common.sh@940 -- # kill -0 55510 00:04:37.085 18:02:55 -- common/autotest_common.sh@941 -- # uname 00:04:37.344 18:02:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.344 18:02:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55510 00:04:37.344 killing process with pid 55510 00:04:37.344 18:02:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.344 18:02:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.344 18:02:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55510' 00:04:37.344 18:02:55 -- common/autotest_common.sh@955 -- # kill 55510 00:04:37.344 18:02:55 -- common/autotest_common.sh@960 -- # wait 55510 00:04:37.623 ************************************ 00:04:37.623 END TEST non_locking_app_on_locked_coremask 00:04:37.623 ************************************ 00:04:37.623 00:04:37.623 real 0m3.711s 00:04:37.623 user 0m4.383s 00:04:37.623 sys 0m0.890s 00:04:37.623 18:02:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.623 18:02:55 -- common/autotest_common.sh@10 -- # set +x 00:04:37.623 18:02:56 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:37.623 18:02:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.623 18:02:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.623 18:02:56 -- common/autotest_common.sh@10 -- # set +x 00:04:37.623 ************************************ 00:04:37.623 START TEST locking_app_on_unlocked_coremask 00:04:37.623 ************************************ 00:04:37.623 18:02:56 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:04:37.623 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.623 18:02:56 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=55571 00:04:37.623 18:02:56 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:37.623 18:02:56 -- event/cpu_locks.sh@99 -- # waitforlisten 55571 /var/tmp/spdk.sock 00:04:37.623 18:02:56 -- common/autotest_common.sh@829 -- # '[' -z 55571 ']' 00:04:37.623 18:02:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.623 18:02:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.623 18:02:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.623 18:02:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.623 18:02:56 -- common/autotest_common.sh@10 -- # set +x 00:04:37.623 [2024-11-18 18:02:56.088181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:37.623 [2024-11-18 18:02:56.088434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55571 ] 00:04:37.927 [2024-11-18 18:02:56.222211] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:37.927 [2024-11-18 18:02:56.222426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.927 [2024-11-18 18:02:56.288942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:37.927 [2024-11-18 18:02:56.289384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.495 18:02:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.495 18:02:57 -- common/autotest_common.sh@862 -- # return 0 00:04:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.495 18:02:57 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55587 00:04:38.495 18:02:57 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.495 18:02:57 -- event/cpu_locks.sh@103 -- # waitforlisten 55587 /var/tmp/spdk2.sock 00:04:38.495 18:02:57 -- common/autotest_common.sh@829 -- # '[' -z 55587 ']' 00:04:38.495 18:02:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.495 18:02:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.495 18:02:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.495 18:02:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.495 18:02:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.754 [2024-11-18 18:02:57.128510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:38.754 [2024-11-18 18:02:57.128826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55587 ] 00:04:38.754 [2024-11-18 18:02:57.264080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.013 [2024-11-18 18:02:57.370403] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.014 [2024-11-18 18:02:57.374651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.581 18:02:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.581 18:02:58 -- common/autotest_common.sh@862 -- # return 0 00:04:39.581 18:02:58 -- event/cpu_locks.sh@105 -- # locks_exist 55587 00:04:39.581 18:02:58 -- event/cpu_locks.sh@22 -- # lslocks -p 55587 00:04:39.581 18:02:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.516 18:02:58 -- event/cpu_locks.sh@107 -- # killprocess 55571 00:04:40.516 18:02:58 -- common/autotest_common.sh@936 -- # '[' -z 55571 ']' 00:04:40.516 18:02:58 -- common/autotest_common.sh@940 -- # kill -0 55571 00:04:40.516 18:02:58 -- common/autotest_common.sh@941 -- # uname 00:04:40.516 18:02:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.516 18:02:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55571 00:04:40.516 killing process with pid 55571 00:04:40.517 18:02:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.517 18:02:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.517 18:02:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55571' 00:04:40.517 18:02:58 -- common/autotest_common.sh@955 -- # kill 55571 00:04:40.517 18:02:58 -- common/autotest_common.sh@960 -- # wait 55571 00:04:41.084 18:02:59 -- event/cpu_locks.sh@108 -- # killprocess 55587 00:04:41.084 18:02:59 -- common/autotest_common.sh@936 -- # '[' -z 55587 ']' 00:04:41.084 18:02:59 -- common/autotest_common.sh@940 -- # kill -0 55587 00:04:41.084 18:02:59 -- common/autotest_common.sh@941 -- # uname 00:04:41.084 18:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.084 18:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55587 00:04:41.084 killing process with pid 55587 00:04:41.084 18:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.084 18:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.084 18:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55587' 00:04:41.084 18:02:59 -- common/autotest_common.sh@955 -- # kill 55587 00:04:41.084 18:02:59 -- common/autotest_common.sh@960 -- # wait 55587 00:04:41.343 00:04:41.343 real 0m3.666s 00:04:41.343 user 0m4.348s 00:04:41.343 sys 0m0.867s 00:04:41.343 18:02:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.343 18:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.343 ************************************ 00:04:41.343 END TEST locking_app_on_unlocked_coremask 00:04:41.343 ************************************ 00:04:41.343 18:02:59 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:41.343 18:02:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.343 18:02:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.343 18:02:59 -- common/autotest_common.sh@10 -- # set +x 
00:04:41.343 ************************************ 00:04:41.343 START TEST locking_app_on_locked_coremask 00:04:41.343 ************************************ 00:04:41.343 18:02:59 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:04:41.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.343 18:02:59 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55649 00:04:41.343 18:02:59 -- event/cpu_locks.sh@116 -- # waitforlisten 55649 /var/tmp/spdk.sock 00:04:41.343 18:02:59 -- common/autotest_common.sh@829 -- # '[' -z 55649 ']' 00:04:41.343 18:02:59 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.343 18:02:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.343 18:02:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.343 18:02:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.343 18:02:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.343 18:02:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.343 [2024-11-18 18:02:59.818736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:41.343 [2024-11-18 18:02:59.819074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55649 ] 00:04:41.603 [2024-11-18 18:02:59.953411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.603 [2024-11-18 18:03:00.002507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.603 [2024-11-18 18:03:00.002702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.541 18:03:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.541 18:03:00 -- common/autotest_common.sh@862 -- # return 0 00:04:42.541 18:03:00 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55665 00:04:42.541 18:03:00 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55665 /var/tmp/spdk2.sock 00:04:42.541 18:03:00 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:42.541 18:03:00 -- common/autotest_common.sh@650 -- # local es=0 00:04:42.541 18:03:00 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55665 /var/tmp/spdk2.sock 00:04:42.541 18:03:00 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:42.541 18:03:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.541 18:03:00 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:42.541 18:03:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.541 18:03:00 -- common/autotest_common.sh@653 -- # waitforlisten 55665 /var/tmp/spdk2.sock 00:04:42.541 18:03:00 -- common/autotest_common.sh@829 -- # '[' -z 55665 ']' 00:04:42.541 18:03:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.541 18:03:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.541 18:03:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:42.541 18:03:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.541 18:03:00 -- common/autotest_common.sh@10 -- # set +x 00:04:42.541 [2024-11-18 18:03:00.860507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.541 [2024-11-18 18:03:00.860844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55665 ] 00:04:42.541 [2024-11-18 18:03:01.001191] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55649 has claimed it. 00:04:42.541 [2024-11-18 18:03:01.001256] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:43.108 ERROR: process (pid: 55665) is no longer running 00:04:43.108 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55665) - No such process 00:04:43.108 18:03:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.108 18:03:01 -- common/autotest_common.sh@862 -- # return 1 00:04:43.109 18:03:01 -- common/autotest_common.sh@653 -- # es=1 00:04:43.109 18:03:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.109 18:03:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.109 18:03:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.109 18:03:01 -- event/cpu_locks.sh@122 -- # locks_exist 55649 00:04:43.109 18:03:01 -- event/cpu_locks.sh@22 -- # lslocks -p 55649 00:04:43.109 18:03:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.677 18:03:01 -- event/cpu_locks.sh@124 -- # killprocess 55649 00:04:43.677 18:03:01 -- common/autotest_common.sh@936 -- # '[' -z 55649 ']' 00:04:43.677 18:03:01 -- common/autotest_common.sh@940 -- # kill -0 55649 00:04:43.677 18:03:01 -- common/autotest_common.sh@941 -- # uname 00:04:43.677 18:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.677 18:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55649 00:04:43.677 killing process with pid 55649 00:04:43.677 18:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.677 18:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.677 18:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55649' 00:04:43.677 18:03:02 -- common/autotest_common.sh@955 -- # kill 55649 00:04:43.677 18:03:02 -- common/autotest_common.sh@960 -- # wait 55649 00:04:43.937 ************************************ 00:04:43.937 END TEST locking_app_on_locked_coremask 00:04:43.937 ************************************ 00:04:43.937 00:04:43.937 real 0m2.544s 00:04:43.937 user 0m3.082s 00:04:43.937 sys 0m0.540s 00:04:43.937 18:03:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.937 18:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:43.937 18:03:02 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:43.937 18:03:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.937 18:03:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.937 18:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:43.937 ************************************ 00:04:43.937 START TEST locking_overlapped_coremask 00:04:43.937 ************************************ 00:04:43.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.937 18:03:02 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:04:43.937 18:03:02 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55710 00:04:43.937 18:03:02 -- event/cpu_locks.sh@133 -- # waitforlisten 55710 /var/tmp/spdk.sock 00:04:43.937 18:03:02 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:04:43.937 18:03:02 -- common/autotest_common.sh@829 -- # '[' -z 55710 ']' 00:04:43.937 18:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.937 18:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.937 18:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.937 18:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.937 18:03:02 -- common/autotest_common.sh@10 -- # set +x 00:04:43.937 [2024-11-18 18:03:02.414614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:43.937 [2024-11-18 18:03:02.414709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55710 ] 00:04:44.196 [2024-11-18 18:03:02.549485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.196 [2024-11-18 18:03:02.602047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.196 [2024-11-18 18:03:02.602350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.196 [2024-11-18 18:03:02.602547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.196 [2024-11-18 18:03:02.602550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.764 18:03:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.764 18:03:03 -- common/autotest_common.sh@862 -- # return 0 00:04:44.764 18:03:03 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55728 00:04:44.764 18:03:03 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:44.764 18:03:03 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55728 /var/tmp/spdk2.sock 00:04:44.764 18:03:03 -- common/autotest_common.sh@650 -- # local es=0 00:04:44.764 18:03:03 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55728 /var/tmp/spdk2.sock 00:04:44.764 18:03:03 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:44.764 18:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.764 18:03:03 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:44.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.764 18:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.764 18:03:03 -- common/autotest_common.sh@653 -- # waitforlisten 55728 /var/tmp/spdk2.sock 00:04:44.764 18:03:03 -- common/autotest_common.sh@829 -- # '[' -z 55728 ']' 00:04:44.764 18:03:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.764 18:03:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.764 18:03:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
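The core-mask arithmetic behind the failure that follows, worked out from the masks printed in this log:

  -m 0x7  = 0b00111 -> cores {0,1,2}   (first target, pid 55710)
  -m 0x1c = 0b11100 -> cores {2,3,4}   (second target)
  overlap = core 2

so the second spdk_tgt is expected to fail its claim on core 2, which is exactly the "Cannot create lock on core 2" error below.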
00:04:44.764 18:03:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.764 18:03:03 -- common/autotest_common.sh@10 -- # set +x 00:04:45.022 [2024-11-18 18:03:03.431330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:45.022 [2024-11-18 18:03:03.431414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55728 ] 00:04:45.022 [2024-11-18 18:03:03.572943] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55710 has claimed it. 00:04:45.022 [2024-11-18 18:03:03.573006] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.590 ERROR: process (pid: 55728) is no longer running 00:04:45.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55728) - No such process 00:04:45.590 18:03:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.590 18:03:04 -- common/autotest_common.sh@862 -- # return 1 00:04:45.590 18:03:04 -- common/autotest_common.sh@653 -- # es=1 00:04:45.590 18:03:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.590 18:03:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.590 18:03:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.590 18:03:04 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:45.590 18:03:04 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.590 18:03:04 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.590 18:03:04 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.590 18:03:04 -- event/cpu_locks.sh@141 -- # killprocess 55710 00:04:45.590 18:03:04 -- common/autotest_common.sh@936 -- # '[' -z 55710 ']' 00:04:45.590 18:03:04 -- common/autotest_common.sh@940 -- # kill -0 55710 00:04:45.590 18:03:04 -- common/autotest_common.sh@941 -- # uname 00:04:45.590 18:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.590 18:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55710 00:04:45.590 18:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.590 18:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.590 18:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55710' 00:04:45.590 killing process with pid 55710 00:04:45.590 18:03:04 -- common/autotest_common.sh@955 -- # kill 55710 00:04:45.590 18:03:04 -- common/autotest_common.sh@960 -- # wait 55710 00:04:46.158 00:04:46.158 real 0m2.109s 00:04:46.158 user 0m6.099s 00:04:46.158 sys 0m0.303s 00:04:46.158 18:03:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.158 18:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:46.158 ************************************ 00:04:46.158 END TEST locking_overlapped_coremask 00:04:46.158 ************************************ 00:04:46.158 18:03:04 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:46.158 18:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.158 18:03:04 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.158 18:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:46.158 ************************************ 00:04:46.158 START TEST locking_overlapped_coremask_via_rpc 00:04:46.158 ************************************ 00:04:46.158 18:03:04 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:04:46.158 18:03:04 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55774 00:04:46.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.158 18:03:04 -- event/cpu_locks.sh@149 -- # waitforlisten 55774 /var/tmp/spdk.sock 00:04:46.158 18:03:04 -- common/autotest_common.sh@829 -- # '[' -z 55774 ']' 00:04:46.158 18:03:04 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:46.158 18:03:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.158 18:03:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.158 18:03:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.158 18:03:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.158 18:03:04 -- common/autotest_common.sh@10 -- # set +x 00:04:46.158 [2024-11-18 18:03:04.572236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.158 [2024-11-18 18:03:04.573038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55774 ] 00:04:46.158 [2024-11-18 18:03:04.712329] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:46.158 [2024-11-18 18:03:04.712362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.417 [2024-11-18 18:03:04.762775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.417 [2024-11-18 18:03:04.763239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.417 [2024-11-18 18:03:04.763374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.417 [2024-11-18 18:03:04.763391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.985 18:03:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.985 18:03:05 -- common/autotest_common.sh@862 -- # return 0 00:04:46.985 18:03:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:46.985 18:03:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55792 00:04:46.985 18:03:05 -- event/cpu_locks.sh@153 -- # waitforlisten 55792 /var/tmp/spdk2.sock 00:04:46.985 18:03:05 -- common/autotest_common.sh@829 -- # '[' -z 55792 ']' 00:04:46.985 18:03:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.985 18:03:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.985 18:03:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:46.985 18:03:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.985 18:03:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 [2024-11-18 18:03:05.585113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:46.985 [2024-11-18 18:03:05.585240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55792 ] 00:04:47.245 [2024-11-18 18:03:05.730035] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:47.245 [2024-11-18 18:03:05.730087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:47.245 [2024-11-18 18:03:05.841074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.245 [2024-11-18 18:03:05.841375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.245 [2024-11-18 18:03:05.844686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.245 [2024-11-18 18:03:05.844686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:48.182 18:03:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.182 18:03:06 -- common/autotest_common.sh@862 -- # return 0 00:04:48.182 18:03:06 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.182 18:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.182 18:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 18:03:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.182 18:03:06 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.182 18:03:06 -- common/autotest_common.sh@650 -- # local es=0 00:04:48.182 18:03:06 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.182 18:03:06 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:48.182 18:03:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.182 18:03:06 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:48.182 18:03:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.182 18:03:06 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:48.182 18:03:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.182 18:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 [2024-11-18 18:03:06.534741] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55774 has claimed it. 
00:04:48.182 request: 00:04:48.182 { 00:04:48.182 "method": "framework_enable_cpumask_locks", 00:04:48.182 "req_id": 1 00:04:48.182 } 00:04:48.182 Got JSON-RPC error response 00:04:48.182 response: 00:04:48.182 { 00:04:48.182 "code": -32603, 00:04:48.182 "message": "Failed to claim CPU core: 2" 00:04:48.182 } 00:04:48.182 18:03:06 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:48.182 18:03:06 -- common/autotest_common.sh@653 -- # es=1 00:04:48.182 18:03:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.182 18:03:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.182 18:03:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.182 18:03:06 -- event/cpu_locks.sh@158 -- # waitforlisten 55774 /var/tmp/spdk.sock 00:04:48.182 18:03:06 -- common/autotest_common.sh@829 -- # '[' -z 55774 ']' 00:04:48.182 18:03:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.182 18:03:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.182 18:03:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.182 18:03:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.182 18:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.182 18:03:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.182 18:03:06 -- common/autotest_common.sh@862 -- # return 0 00:04:48.182 18:03:06 -- event/cpu_locks.sh@159 -- # waitforlisten 55792 /var/tmp/spdk2.sock 00:04:48.182 18:03:06 -- common/autotest_common.sh@829 -- # '[' -z 55792 ']' 00:04:48.182 18:03:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.182 18:03:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.182 18:03:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
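The exchange above can be reproduced with SPDK's rpc.py (a sketch assuming the checkout used in this run; this exact command is not part of the captured output):

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

Against the second target it yields the -32603 "Failed to claim CPU core: 2" response shown, because pid 55774 already holds core 2; the same RPC without -s (against /var/tmp/spdk.sock) is the call that succeeded just before, and the /var/tmp/spdk_cpu_lock_000..002 files checked afterwards correspond to its claims on cores 0-2.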
00:04:48.182 18:03:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.182 18:03:06 -- common/autotest_common.sh@10 -- # set +x 00:04:48.752 18:03:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.752 18:03:07 -- common/autotest_common.sh@862 -- # return 0 00:04:48.752 ************************************ 00:04:48.752 END TEST locking_overlapped_coremask_via_rpc 00:04:48.752 ************************************ 00:04:48.752 18:03:07 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:48.752 18:03:07 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:48.752 18:03:07 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:48.752 18:03:07 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:48.752 00:04:48.752 real 0m2.552s 00:04:48.752 user 0m1.312s 00:04:48.752 sys 0m0.162s 00:04:48.752 18:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.752 18:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:48.752 18:03:07 -- event/cpu_locks.sh@174 -- # cleanup 00:04:48.752 18:03:07 -- event/cpu_locks.sh@15 -- # [[ -z 55774 ]] 00:04:48.752 18:03:07 -- event/cpu_locks.sh@15 -- # killprocess 55774 00:04:48.752 18:03:07 -- common/autotest_common.sh@936 -- # '[' -z 55774 ']' 00:04:48.752 18:03:07 -- common/autotest_common.sh@940 -- # kill -0 55774 00:04:48.752 18:03:07 -- common/autotest_common.sh@941 -- # uname 00:04:48.752 18:03:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:48.752 18:03:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55774 00:04:48.752 killing process with pid 55774 00:04:48.752 18:03:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:48.752 18:03:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:48.752 18:03:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55774' 00:04:48.752 18:03:07 -- common/autotest_common.sh@955 -- # kill 55774 00:04:48.752 18:03:07 -- common/autotest_common.sh@960 -- # wait 55774 00:04:49.012 18:03:07 -- event/cpu_locks.sh@16 -- # [[ -z 55792 ]] 00:04:49.012 18:03:07 -- event/cpu_locks.sh@16 -- # killprocess 55792 00:04:49.012 18:03:07 -- common/autotest_common.sh@936 -- # '[' -z 55792 ']' 00:04:49.012 18:03:07 -- common/autotest_common.sh@940 -- # kill -0 55792 00:04:49.012 18:03:07 -- common/autotest_common.sh@941 -- # uname 00:04:49.012 18:03:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.012 18:03:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55792 00:04:49.012 killing process with pid 55792 00:04:49.012 18:03:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:49.012 18:03:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:49.012 18:03:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55792' 00:04:49.012 18:03:07 -- common/autotest_common.sh@955 -- # kill 55792 00:04:49.012 18:03:07 -- common/autotest_common.sh@960 -- # wait 55792 00:04:49.271 18:03:07 -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.271 18:03:07 -- event/cpu_locks.sh@1 -- # cleanup 00:04:49.271 18:03:07 -- event/cpu_locks.sh@15 -- # [[ -z 55774 ]] 00:04:49.271 18:03:07 -- event/cpu_locks.sh@15 -- # killprocess 55774 00:04:49.271 18:03:07 -- 
common/autotest_common.sh@936 -- # '[' -z 55774 ']' 00:04:49.271 18:03:07 -- common/autotest_common.sh@940 -- # kill -0 55774 00:04:49.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55774) - No such process 00:04:49.271 Process with pid 55774 is not found 00:04:49.271 18:03:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55774 is not found' 00:04:49.271 18:03:07 -- event/cpu_locks.sh@16 -- # [[ -z 55792 ]] 00:04:49.271 18:03:07 -- event/cpu_locks.sh@16 -- # killprocess 55792 00:04:49.271 18:03:07 -- common/autotest_common.sh@936 -- # '[' -z 55792 ']' 00:04:49.271 18:03:07 -- common/autotest_common.sh@940 -- # kill -0 55792 00:04:49.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55792) - No such process 00:04:49.271 18:03:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55792 is not found' 00:04:49.271 Process with pid 55792 is not found 00:04:49.271 18:03:07 -- event/cpu_locks.sh@18 -- # rm -f 00:04:49.271 ************************************ 00:04:49.271 END TEST cpu_locks 00:04:49.271 ************************************ 00:04:49.271 00:04:49.271 real 0m19.200s 00:04:49.271 user 0m34.944s 00:04:49.271 sys 0m4.241s 00:04:49.271 18:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.271 18:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 ************************************ 00:04:49.271 END TEST event 00:04:49.271 ************************************ 00:04:49.271 00:04:49.271 real 0m45.006s 00:04:49.271 user 1m27.306s 00:04:49.271 sys 0m7.410s 00:04:49.271 18:03:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.271 18:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 18:03:07 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:49.271 18:03:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.271 18:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.271 18:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 ************************************ 00:04:49.271 START TEST thread 00:04:49.271 ************************************ 00:04:49.271 18:03:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:49.530 * Looking for test storage... 
00:04:49.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:49.530 18:03:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.530 18:03:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.530 18:03:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.530 18:03:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.530 18:03:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.530 18:03:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.530 18:03:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.530 18:03:07 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.530 18:03:07 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.530 18:03:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.530 18:03:07 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.530 18:03:07 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.530 18:03:07 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.530 18:03:07 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.530 18:03:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.530 18:03:07 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.530 18:03:07 -- scripts/common.sh@344 -- # : 1 00:04:49.530 18:03:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.530 18:03:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.530 18:03:07 -- scripts/common.sh@364 -- # decimal 1 00:04:49.530 18:03:07 -- scripts/common.sh@352 -- # local d=1 00:04:49.530 18:03:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.530 18:03:07 -- scripts/common.sh@354 -- # echo 1 00:04:49.530 18:03:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.530 18:03:07 -- scripts/common.sh@365 -- # decimal 2 00:04:49.530 18:03:07 -- scripts/common.sh@352 -- # local d=2 00:04:49.530 18:03:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.530 18:03:07 -- scripts/common.sh@354 -- # echo 2 00:04:49.530 18:03:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.530 18:03:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.530 18:03:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.530 18:03:07 -- scripts/common.sh@367 -- # return 0 00:04:49.530 18:03:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.530 18:03:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.530 --rc genhtml_branch_coverage=1 00:04:49.530 --rc genhtml_function_coverage=1 00:04:49.530 --rc genhtml_legend=1 00:04:49.530 --rc geninfo_all_blocks=1 00:04:49.530 --rc geninfo_unexecuted_blocks=1 00:04:49.530 00:04:49.530 ' 00:04:49.530 18:03:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.530 --rc genhtml_branch_coverage=1 00:04:49.530 --rc genhtml_function_coverage=1 00:04:49.530 --rc genhtml_legend=1 00:04:49.530 --rc geninfo_all_blocks=1 00:04:49.530 --rc geninfo_unexecuted_blocks=1 00:04:49.530 00:04:49.530 ' 00:04:49.530 18:03:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.530 --rc genhtml_branch_coverage=1 00:04:49.530 --rc genhtml_function_coverage=1 00:04:49.530 --rc genhtml_legend=1 00:04:49.530 --rc geninfo_all_blocks=1 00:04:49.530 --rc geninfo_unexecuted_blocks=1 00:04:49.530 00:04:49.530 ' 00:04:49.531 18:03:07 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.531 --rc genhtml_branch_coverage=1 00:04:49.531 --rc genhtml_function_coverage=1 00:04:49.531 --rc genhtml_legend=1 00:04:49.531 --rc geninfo_all_blocks=1 00:04:49.531 --rc geninfo_unexecuted_blocks=1 00:04:49.531 00:04:49.531 ' 00:04:49.531 18:03:07 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.531 18:03:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:49.531 18:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.531 18:03:07 -- common/autotest_common.sh@10 -- # set +x 00:04:49.531 ************************************ 00:04:49.531 START TEST thread_poller_perf 00:04:49.531 ************************************ 00:04:49.531 18:03:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.531 [2024-11-18 18:03:08.022887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.531 [2024-11-18 18:03:08.023150] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55916 ] 00:04:49.790 [2024-11-18 18:03:08.162047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.790 [2024-11-18 18:03:08.209980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.790 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:50.726 [2024-11-18T18:03:09.330Z] ====================================== 00:04:50.726 [2024-11-18T18:03:09.330Z] busy:2211381024 (cyc) 00:04:50.726 [2024-11-18T18:03:09.330Z] total_run_count: 355000 00:04:50.726 [2024-11-18T18:03:09.330Z] tsc_hz: 2200000000 (cyc) 00:04:50.726 [2024-11-18T18:03:09.330Z] ====================================== 00:04:50.726 [2024-11-18T18:03:09.330Z] poller_cost: 6229 (cyc), 2831 (nsec) 00:04:50.726 00:04:50.726 ************************************ 00:04:50.726 END TEST thread_poller_perf 00:04:50.726 ************************************ 00:04:50.726 real 0m1.292s 00:04:50.726 user 0m1.144s 00:04:50.726 sys 0m0.040s 00:04:50.726 18:03:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.726 18:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.986 18:03:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.986 18:03:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:50.986 18:03:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.986 18:03:09 -- common/autotest_common.sh@10 -- # set +x 00:04:50.986 ************************************ 00:04:50.986 START TEST thread_poller_perf 00:04:50.986 ************************************ 00:04:50.986 18:03:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:50.986 [2024-11-18 18:03:09.375426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
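How the poller_cost line above is computed, using only the numbers printed:

  2211381024 busy cyc / 355000 runs ≈ 6229 cyc per poll
  6229 cyc / 2.2 GHz (tsc_hz)       ≈ 2831 nsec per poll

The second perf run below (0 microseconds period) follows the same arithmetic: 2202706874 / 4902000 ≈ 449 cyc ≈ 204 nsec.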
00:04:50.986 [2024-11-18 18:03:09.375720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55946 ] 00:04:50.986 [2024-11-18 18:03:09.513717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.986 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:50.986 [2024-11-18 18:03:09.565144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.366 [2024-11-18T18:03:10.970Z] ====================================== 00:04:52.366 [2024-11-18T18:03:10.970Z] busy:2202706874 (cyc) 00:04:52.366 [2024-11-18T18:03:10.970Z] total_run_count: 4902000 00:04:52.366 [2024-11-18T18:03:10.970Z] tsc_hz: 2200000000 (cyc) 00:04:52.366 [2024-11-18T18:03:10.970Z] ====================================== 00:04:52.366 [2024-11-18T18:03:10.970Z] poller_cost: 449 (cyc), 204 (nsec) 00:04:52.366 ************************************ 00:04:52.366 END TEST thread_poller_perf 00:04:52.366 ************************************ 00:04:52.366 00:04:52.366 real 0m1.297s 00:04:52.366 user 0m1.146s 00:04:52.366 sys 0m0.042s 00:04:52.366 18:03:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.366 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.366 18:03:10 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:52.366 ************************************ 00:04:52.366 END TEST thread 00:04:52.366 ************************************ 00:04:52.366 00:04:52.366 real 0m2.880s 00:04:52.366 user 0m2.448s 00:04:52.366 sys 0m0.207s 00:04:52.366 18:03:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.366 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.366 18:03:10 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:52.366 18:03:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.366 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.366 ************************************ 00:04:52.366 START TEST accel 00:04:52.366 ************************************ 00:04:52.366 18:03:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:52.366 * Looking for test storage... 
00:04:52.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:52.366 18:03:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.366 18:03:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.366 18:03:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.366 18:03:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.366 18:03:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.366 18:03:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.366 18:03:10 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.366 18:03:10 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.366 18:03:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.366 18:03:10 -- scripts/common.sh@336 -- # read -ra ver2 00:04:52.366 18:03:10 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.366 18:03:10 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.366 18:03:10 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.366 18:03:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.366 18:03:10 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.366 18:03:10 -- scripts/common.sh@344 -- # : 1 00:04:52.366 18:03:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.366 18:03:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.366 18:03:10 -- scripts/common.sh@364 -- # decimal 1 00:04:52.366 18:03:10 -- scripts/common.sh@352 -- # local d=1 00:04:52.366 18:03:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.366 18:03:10 -- scripts/common.sh@354 -- # echo 1 00:04:52.366 18:03:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.366 18:03:10 -- scripts/common.sh@365 -- # decimal 2 00:04:52.366 18:03:10 -- scripts/common.sh@352 -- # local d=2 00:04:52.366 18:03:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.366 18:03:10 -- scripts/common.sh@354 -- # echo 2 00:04:52.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.366 18:03:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.366 18:03:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.366 18:03:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.366 18:03:10 -- scripts/common.sh@367 -- # return 0 00:04:52.366 18:03:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.366 --rc genhtml_branch_coverage=1 00:04:52.366 --rc genhtml_function_coverage=1 00:04:52.366 --rc genhtml_legend=1 00:04:52.366 --rc geninfo_all_blocks=1 00:04:52.366 --rc geninfo_unexecuted_blocks=1 00:04:52.366 00:04:52.366 ' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.366 --rc genhtml_branch_coverage=1 00:04:52.366 --rc genhtml_function_coverage=1 00:04:52.366 --rc genhtml_legend=1 00:04:52.366 --rc geninfo_all_blocks=1 00:04:52.366 --rc geninfo_unexecuted_blocks=1 00:04:52.366 00:04:52.366 ' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.366 --rc genhtml_branch_coverage=1 00:04:52.366 --rc genhtml_function_coverage=1 00:04:52.366 --rc genhtml_legend=1 00:04:52.366 --rc geninfo_all_blocks=1 00:04:52.366 --rc geninfo_unexecuted_blocks=1 00:04:52.366 00:04:52.366 ' 00:04:52.366 18:03:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.366 --rc genhtml_branch_coverage=1 00:04:52.366 --rc genhtml_function_coverage=1 00:04:52.366 --rc genhtml_legend=1 00:04:52.366 --rc geninfo_all_blocks=1 00:04:52.366 --rc geninfo_unexecuted_blocks=1 00:04:52.366 00:04:52.366 ' 00:04:52.366 18:03:10 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:52.366 18:03:10 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:52.366 18:03:10 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.366 18:03:10 -- accel/accel.sh@59 -- # spdk_tgt_pid=56033 00:04:52.366 18:03:10 -- accel/accel.sh@60 -- # waitforlisten 56033 00:04:52.366 18:03:10 -- common/autotest_common.sh@829 -- # '[' -z 56033 ']' 00:04:52.366 18:03:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.367 18:03:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.367 18:03:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.367 18:03:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.367 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.367 18:03:10 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:52.367 18:03:10 -- accel/accel.sh@58 -- # build_accel_config 00:04:52.367 18:03:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:52.367 18:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.367 18:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.367 18:03:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:52.367 18:03:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:52.367 18:03:10 -- accel/accel.sh@41 -- # local IFS=, 00:04:52.367 18:03:10 -- accel/accel.sh@42 -- # jq -r . 
00:04:52.626 [2024-11-18 18:03:10.981325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:52.626 [2024-11-18 18:03:10.981649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56033 ] 00:04:52.626 [2024-11-18 18:03:11.119052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.626 [2024-11-18 18:03:11.167626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.626 [2024-11-18 18:03:11.168048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.564 18:03:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.564 18:03:11 -- common/autotest_common.sh@862 -- # return 0 00:04:53.564 18:03:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:53.564 18:03:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:53.564 18:03:11 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:53.564 18:03:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.564 18:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.564 18:03:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 
00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # IFS== 00:04:53.564 18:03:12 -- accel/accel.sh@64 -- # read -r opc module 00:04:53.564 18:03:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:53.564 18:03:12 -- accel/accel.sh@67 -- # killprocess 56033 00:04:53.564 18:03:12 -- common/autotest_common.sh@936 -- # '[' -z 56033 ']' 00:04:53.564 18:03:12 -- common/autotest_common.sh@940 -- # kill -0 56033 00:04:53.564 18:03:12 -- common/autotest_common.sh@941 -- # uname 00:04:53.564 18:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.564 18:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56033 00:04:53.564 18:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.564 18:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.565 18:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56033' 00:04:53.565 killing process with pid 56033 00:04:53.565 18:03:12 -- common/autotest_common.sh@955 -- # kill 56033 00:04:53.565 18:03:12 -- common/autotest_common.sh@960 -- # wait 56033 00:04:53.824 18:03:12 -- accel/accel.sh@68 -- # trap - ERR 00:04:53.824 18:03:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:53.824 18:03:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:53.824 18:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.824 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.824 18:03:12 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:04:53.824 18:03:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:53.824 18:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.824 18:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:53.824 18:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.824 18:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:04:53.824 18:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:53.824 18:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:53.824 18:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:04:53.824 18:03:12 -- accel/accel.sh@42 -- # jq -r . 00:04:53.824 18:03:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.824 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.824 18:03:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:53.824 18:03:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:53.824 18:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.824 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.824 ************************************ 00:04:53.824 START TEST accel_missing_filename 00:04:53.824 ************************************ 00:04:53.824 18:03:12 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:04:53.824 18:03:12 -- common/autotest_common.sh@650 -- # local es=0 00:04:53.824 18:03:12 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:53.824 18:03:12 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:53.824 18:03:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.824 18:03:12 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:53.824 18:03:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.824 18:03:12 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:04:53.824 18:03:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:53.824 18:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.824 18:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:53.824 18:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.824 18:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.824 18:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:53.824 18:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:53.824 18:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:04:53.824 18:03:12 -- accel/accel.sh@42 -- # jq -r . 00:04:53.824 [2024-11-18 18:03:12.424058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:53.824 [2024-11-18 18:03:12.424144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56079 ] 00:04:54.084 [2024-11-18 18:03:12.559677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.084 [2024-11-18 18:03:12.615006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.084 [2024-11-18 18:03:12.646271] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.084 [2024-11-18 18:03:12.683318] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:54.343 A filename is required. 
00:04:54.343 18:03:12 -- common/autotest_common.sh@653 -- # es=234 00:04:54.343 18:03:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.343 18:03:12 -- common/autotest_common.sh@662 -- # es=106 00:04:54.343 18:03:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:54.343 18:03:12 -- common/autotest_common.sh@670 -- # es=1 00:04:54.343 18:03:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.343 00:04:54.343 real 0m0.364s 00:04:54.343 user 0m0.241s 00:04:54.343 sys 0m0.069s 00:04:54.343 18:03:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.343 ************************************ 00:04:54.343 END TEST accel_missing_filename 00:04:54.343 ************************************ 00:04:54.343 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:54.343 18:03:12 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.343 18:03:12 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:54.343 18:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.343 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:54.343 ************************************ 00:04:54.343 START TEST accel_compress_verify 00:04:54.343 ************************************ 00:04:54.343 18:03:12 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.343 18:03:12 -- common/autotest_common.sh@650 -- # local es=0 00:04:54.343 18:03:12 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.343 18:03:12 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:54.343 18:03:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.343 18:03:12 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:54.343 18:03:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.343 18:03:12 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.343 18:03:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.343 18:03:12 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.343 18:03:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.343 18:03:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.343 18:03:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.343 18:03:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.343 18:03:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.343 18:03:12 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.343 18:03:12 -- accel/accel.sh@42 -- # jq -r . 00:04:54.343 [2024-11-18 18:03:12.839765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:54.344 [2024-11-18 18:03:12.840437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56109 ] 00:04:54.603 [2024-11-18 18:03:12.972592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.603 [2024-11-18 18:03:13.022439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.603 [2024-11-18 18:03:13.049424] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:54.603 [2024-11-18 18:03:13.088857] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:54.603 00:04:54.603 Compression does not support the verify option, aborting. 00:04:54.603 18:03:13 -- common/autotest_common.sh@653 -- # es=161 00:04:54.603 18:03:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.603 18:03:13 -- common/autotest_common.sh@662 -- # es=33 00:04:54.603 18:03:13 -- common/autotest_common.sh@663 -- # case "$es" in 00:04:54.603 18:03:13 -- common/autotest_common.sh@670 -- # es=1 00:04:54.603 18:03:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.603 00:04:54.603 real 0m0.358s 00:04:54.603 user 0m0.237s 00:04:54.603 sys 0m0.067s 00:04:54.603 ************************************ 00:04:54.603 END TEST accel_compress_verify 00:04:54.603 ************************************ 00:04:54.603 18:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.603 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.862 18:03:13 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:54.863 18:03:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:54.863 18:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.863 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 ************************************ 00:04:54.863 START TEST accel_wrong_workload 00:04:54.863 ************************************ 00:04:54.863 18:03:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:04:54.863 18:03:13 -- common/autotest_common.sh@650 -- # local es=0 00:04:54.863 18:03:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:54.863 18:03:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.863 18:03:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.863 18:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.863 18:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.863 18:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.863 18:03:13 -- accel/accel.sh@42 -- # jq -r . 
00:04:54.863 Unsupported workload type: foobar 00:04:54.863 [2024-11-18 18:03:13.245120] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:54.863 accel_perf options: 00:04:54.863 [-h help message] 00:04:54.863 [-q queue depth per core] 00:04:54.863 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:54.863 [-T number of threads per core 00:04:54.863 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:54.863 [-t time in seconds] 00:04:54.863 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:54.863 [ dif_verify, , dif_generate, dif_generate_copy 00:04:54.863 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:54.863 [-l for compress/decompress workloads, name of uncompressed input file 00:04:54.863 [-S for crc32c workload, use this seed value (default 0) 00:04:54.863 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:54.863 [-f for fill workload, use this BYTE value (default 255) 00:04:54.863 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:54.863 [-y verify result if this switch is on] 00:04:54.863 [-a tasks to allocate per core (default: same value as -q)] 00:04:54.863 Can be used to spread operations across a wider range of memory. 00:04:54.863 18:03:13 -- common/autotest_common.sh@653 -- # es=1 00:04:54.863 18:03:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.863 18:03:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.863 ************************************ 00:04:54.863 END TEST accel_wrong_workload 00:04:54.863 ************************************ 00:04:54.863 18:03:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.863 00:04:54.863 real 0m0.030s 00:04:54.863 user 0m0.012s 00:04:54.863 sys 0m0.017s 00:04:54.863 18:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.863 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 18:03:13 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:54.863 18:03:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:54.863 18:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.863 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 ************************************ 00:04:54.863 START TEST accel_negative_buffers 00:04:54.863 ************************************ 00:04:54.863 18:03:13 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:54.863 18:03:13 -- common/autotest_common.sh@650 -- # local es=0 00:04:54.863 18:03:13 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:54.863 18:03:13 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:04:54.863 18:03:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.863 18:03:13 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:04:54.863 18:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.863 18:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.863 18:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.863 18:03:13 -- accel/accel.sh@42 -- # jq -r . 00:04:54.863 -x option must be non-negative. 00:04:54.863 [2024-11-18 18:03:13.320981] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:54.863 accel_perf options: 00:04:54.863 [-h help message] 00:04:54.863 [-q queue depth per core] 00:04:54.863 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:54.863 [-T number of threads per core 00:04:54.863 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:54.863 [-t time in seconds] 00:04:54.863 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:54.863 [ dif_verify, , dif_generate, dif_generate_copy 00:04:54.863 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:54.863 [-l for compress/decompress workloads, name of uncompressed input file 00:04:54.863 [-S for crc32c workload, use this seed value (default 0) 00:04:54.863 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:54.863 [-f for fill workload, use this BYTE value (default 255) 00:04:54.863 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:54.863 [-y verify result if this switch is on] 00:04:54.863 [-a tasks to allocate per core (default: same value as -q)] 00:04:54.863 Can be used to spread operations across a wider range of memory. 
00:04:54.863 18:03:13 -- common/autotest_common.sh@653 -- # es=1 00:04:54.863 18:03:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.863 18:03:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.863 ************************************ 00:04:54.863 END TEST accel_negative_buffers 00:04:54.863 ************************************ 00:04:54.863 18:03:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.863 00:04:54.863 real 0m0.027s 00:04:54.863 user 0m0.017s 00:04:54.863 sys 0m0.010s 00:04:54.863 18:03:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.863 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 18:03:13 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:54.863 18:03:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:54.863 18:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.863 18:03:13 -- common/autotest_common.sh@10 -- # set +x 00:04:54.863 ************************************ 00:04:54.863 START TEST accel_crc32c 00:04:54.863 ************************************ 00:04:54.863 18:03:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:54.863 18:03:13 -- accel/accel.sh@16 -- # local accel_opc 00:04:54.863 18:03:13 -- accel/accel.sh@17 -- # local accel_module 00:04:54.863 18:03:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:54.863 18:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:54.863 18:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.863 18:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.863 18:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.863 18:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.863 18:03:13 -- accel/accel.sh@42 -- # jq -r . 00:04:54.863 [2024-11-18 18:03:13.395958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:54.863 [2024-11-18 18:03:13.396041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56162 ] 00:04:55.123 [2024-11-18 18:03:13.523826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.123 [2024-11-18 18:03:13.571403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.501 18:03:14 -- accel/accel.sh@18 -- # out=' 00:04:56.501 SPDK Configuration: 00:04:56.501 Core mask: 0x1 00:04:56.501 00:04:56.501 Accel Perf Configuration: 00:04:56.501 Workload Type: crc32c 00:04:56.501 CRC-32C seed: 32 00:04:56.501 Transfer size: 4096 bytes 00:04:56.501 Vector count 1 00:04:56.501 Module: software 00:04:56.501 Queue depth: 32 00:04:56.501 Allocate depth: 32 00:04:56.501 # threads/core: 1 00:04:56.501 Run time: 1 seconds 00:04:56.501 Verify: Yes 00:04:56.501 00:04:56.501 Running for 1 seconds... 
00:04:56.501 00:04:56.501 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:56.501 ------------------------------------------------------------------------------------ 00:04:56.501 0,0 531072/s 2074 MiB/s 0 0 00:04:56.502 ==================================================================================== 00:04:56.502 Total 531072/s 2074 MiB/s 0 0' 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:56.502 18:03:14 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.502 18:03:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:56.502 18:03:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:56.502 18:03:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.502 18:03:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.502 18:03:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:56.502 18:03:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:56.502 18:03:14 -- accel/accel.sh@41 -- # local IFS=, 00:04:56.502 18:03:14 -- accel/accel.sh@42 -- # jq -r . 00:04:56.502 [2024-11-18 18:03:14.762923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.502 [2024-11-18 18:03:14.763589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56187 ] 00:04:56.502 [2024-11-18 18:03:14.895914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.502 [2024-11-18 18:03:14.942524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=0x1 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=crc32c 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=32 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=software 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@23 -- # accel_module=software 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=32 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=32 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=1 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val=Yes 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:56.502 18:03:14 -- accel/accel.sh@21 -- # val= 00:04:56.502 18:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # IFS=: 00:04:56.502 18:03:14 -- accel/accel.sh@20 -- # read -r var val 00:04:57.879 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.879 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.879 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.879 18:03:16 -- accel/accel.sh@20 -- # read -r var val 00:04:57.879 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.879 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.879 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # read -r var val 00:04:57.880 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.880 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # read -r var val 00:04:57.880 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.880 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # read -r var val 00:04:57.880 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.880 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.880 18:03:16 -- 
accel/accel.sh@20 -- # read -r var val 00:04:57.880 ************************************ 00:04:57.880 END TEST accel_crc32c 00:04:57.880 ************************************ 00:04:57.880 18:03:16 -- accel/accel.sh@21 -- # val= 00:04:57.880 18:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # IFS=: 00:04:57.880 18:03:16 -- accel/accel.sh@20 -- # read -r var val 00:04:57.880 18:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:57.880 18:03:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:57.880 18:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.880 00:04:57.880 real 0m2.718s 00:04:57.880 user 0m2.375s 00:04:57.880 sys 0m0.140s 00:04:57.880 18:03:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.880 18:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.880 18:03:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:57.880 18:03:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:57.880 18:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.880 18:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:57.880 ************************************ 00:04:57.880 START TEST accel_crc32c_C2 00:04:57.880 ************************************ 00:04:57.880 18:03:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:57.880 18:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.880 18:03:16 -- accel/accel.sh@17 -- # local accel_module 00:04:57.880 18:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:57.880 18:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:57.880 18:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.880 18:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.880 18:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.880 18:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.880 18:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.880 18:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.880 18:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.880 18:03:16 -- accel/accel.sh@42 -- # jq -r . 00:04:57.880 [2024-11-18 18:03:16.162985] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.880 [2024-11-18 18:03:16.163065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56216 ] 00:04:57.880 [2024-11-18 18:03:16.291054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.880 [2024-11-18 18:03:16.337274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.257 18:03:17 -- accel/accel.sh@18 -- # out=' 00:04:59.257 SPDK Configuration: 00:04:59.257 Core mask: 0x1 00:04:59.257 00:04:59.257 Accel Perf Configuration: 00:04:59.257 Workload Type: crc32c 00:04:59.257 CRC-32C seed: 0 00:04:59.257 Transfer size: 4096 bytes 00:04:59.257 Vector count 2 00:04:59.257 Module: software 00:04:59.257 Queue depth: 32 00:04:59.257 Allocate depth: 32 00:04:59.257 # threads/core: 1 00:04:59.257 Run time: 1 seconds 00:04:59.257 Verify: Yes 00:04:59.257 00:04:59.257 Running for 1 seconds... 
00:04:59.257 00:04:59.257 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:59.257 ------------------------------------------------------------------------------------ 00:04:59.257 0,0 411200/s 3212 MiB/s 0 0 00:04:59.257 ==================================================================================== 00:04:59.257 Total 411200/s 1606 MiB/s 0 0' 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:59.257 18:03:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:59.257 18:03:17 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.257 18:03:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:59.257 18:03:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.257 18:03:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.257 18:03:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:59.257 18:03:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:59.257 18:03:17 -- accel/accel.sh@41 -- # local IFS=, 00:04:59.257 18:03:17 -- accel/accel.sh@42 -- # jq -r . 00:04:59.257 [2024-11-18 18:03:17.506209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:59.257 [2024-11-18 18:03:17.506302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56230 ] 00:04:59.257 [2024-11-18 18:03:17.640109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.257 [2024-11-18 18:03:17.686351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=0x1 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=crc32c 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=0 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=software 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@23 -- # accel_module=software 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=32 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=32 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=1 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val=Yes 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:04:59.257 18:03:17 -- accel/accel.sh@21 -- # val= 00:04:59.257 18:03:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # IFS=: 00:04:59.257 18:03:17 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 18:03:18 -- 
accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@21 -- # val= 00:05:00.267 18:03:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # IFS=: 00:05:00.267 ************************************ 00:05:00.267 END TEST accel_crc32c_C2 00:05:00.267 ************************************ 00:05:00.267 18:03:18 -- accel/accel.sh@20 -- # read -r var val 00:05:00.267 18:03:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:00.267 18:03:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:00.267 18:03:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.267 00:05:00.267 real 0m2.710s 00:05:00.267 user 0m2.383s 00:05:00.267 sys 0m0.123s 00:05:00.267 18:03:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.267 18:03:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.526 18:03:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:00.526 18:03:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:00.526 18:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.526 18:03:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.526 ************************************ 00:05:00.526 START TEST accel_copy 00:05:00.526 ************************************ 00:05:00.526 18:03:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:00.526 18:03:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:00.526 18:03:18 -- accel/accel.sh@17 -- # local accel_module 00:05:00.526 18:03:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:00.526 18:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:00.526 18:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.526 18:03:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.526 18:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.526 18:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.526 18:03:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.526 18:03:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.526 18:03:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.526 18:03:18 -- accel/accel.sh@42 -- # jq -r . 00:05:00.526 [2024-11-18 18:03:18.925550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:00.526 [2024-11-18 18:03:18.925650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56270 ] 00:05:00.526 [2024-11-18 18:03:19.064238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.526 [2024-11-18 18:03:19.110247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.902 18:03:20 -- accel/accel.sh@18 -- # out=' 00:05:01.902 SPDK Configuration: 00:05:01.902 Core mask: 0x1 00:05:01.902 00:05:01.902 Accel Perf Configuration: 00:05:01.902 Workload Type: copy 00:05:01.902 Transfer size: 4096 bytes 00:05:01.902 Vector count 1 00:05:01.902 Module: software 00:05:01.902 Queue depth: 32 00:05:01.902 Allocate depth: 32 00:05:01.902 # threads/core: 1 00:05:01.902 Run time: 1 seconds 00:05:01.902 Verify: Yes 00:05:01.902 00:05:01.902 Running for 1 seconds... 
00:05:01.902 00:05:01.902 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:01.902 ------------------------------------------------------------------------------------ 00:05:01.902 0,0 362784/s 1417 MiB/s 0 0 00:05:01.902 ==================================================================================== 00:05:01.902 Total 362784/s 1417 MiB/s 0 0' 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:01.902 18:03:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:01.902 18:03:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.902 18:03:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:01.902 18:03:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.902 18:03:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.902 18:03:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:01.902 18:03:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:01.902 18:03:20 -- accel/accel.sh@41 -- # local IFS=, 00:05:01.902 18:03:20 -- accel/accel.sh@42 -- # jq -r . 00:05:01.902 [2024-11-18 18:03:20.281068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:01.902 [2024-11-18 18:03:20.281158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56284 ] 00:05:01.902 [2024-11-18 18:03:20.412564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.902 [2024-11-18 18:03:20.458689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=0x1 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=copy 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- 
accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=software 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@23 -- # accel_module=software 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=32 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=32 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=1 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val=Yes 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:01.902 18:03:20 -- accel/accel.sh@21 -- # val= 00:05:01.902 18:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # IFS=: 00:05:01.902 18:03:20 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 ************************************ 00:05:03.280 END TEST accel_copy 00:05:03.280 ************************************ 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@21 -- # val= 00:05:03.280 
18:03:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # IFS=: 00:05:03.280 18:03:21 -- accel/accel.sh@20 -- # read -r var val 00:05:03.280 18:03:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:03.280 18:03:21 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:03.280 18:03:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.280 00:05:03.280 real 0m2.707s 00:05:03.280 user 0m2.366s 00:05:03.280 sys 0m0.141s 00:05:03.280 18:03:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.280 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:03.280 18:03:21 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.280 18:03:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:03.280 18:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.280 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:05:03.280 ************************************ 00:05:03.280 START TEST accel_fill 00:05:03.280 ************************************ 00:05:03.280 18:03:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.280 18:03:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:03.280 18:03:21 -- accel/accel.sh@17 -- # local accel_module 00:05:03.281 18:03:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.281 18:03:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.281 18:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:03.281 18:03:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:03.281 18:03:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.281 18:03:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.281 18:03:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:03.281 18:03:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:03.281 18:03:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:03.281 18:03:21 -- accel/accel.sh@42 -- # jq -r . 00:05:03.281 [2024-11-18 18:03:21.688930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:03.281 [2024-11-18 18:03:21.689178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56313 ] 00:05:03.281 [2024-11-18 18:03:21.821011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.281 [2024-11-18 18:03:21.871891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.658 18:03:23 -- accel/accel.sh@18 -- # out=' 00:05:04.658 SPDK Configuration: 00:05:04.658 Core mask: 0x1 00:05:04.658 00:05:04.658 Accel Perf Configuration: 00:05:04.658 Workload Type: fill 00:05:04.658 Fill pattern: 0x80 00:05:04.658 Transfer size: 4096 bytes 00:05:04.658 Vector count 1 00:05:04.658 Module: software 00:05:04.658 Queue depth: 64 00:05:04.658 Allocate depth: 64 00:05:04.658 # threads/core: 1 00:05:04.658 Run time: 1 seconds 00:05:04.658 Verify: Yes 00:05:04.658 00:05:04.658 Running for 1 seconds... 
00:05:04.658 00:05:04.658 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:04.658 ------------------------------------------------------------------------------------ 00:05:04.658 0,0 532800/s 2081 MiB/s 0 0 00:05:04.658 ==================================================================================== 00:05:04.658 Total 532800/s 2081 MiB/s 0 0' 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.658 18:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.658 18:03:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.658 18:03:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.658 18:03:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:04.658 18:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.658 18:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.658 18:03:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:04.658 18:03:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:04.658 18:03:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:04.658 18:03:23 -- accel/accel.sh@42 -- # jq -r . 00:05:04.658 [2024-11-18 18:03:23.047561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:04.658 [2024-11-18 18:03:23.047649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56333 ] 00:05:04.658 [2024-11-18 18:03:23.182118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.658 [2024-11-18 18:03:23.228364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.658 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.658 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.658 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.658 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.658 18:03:23 -- accel/accel.sh@21 -- # val=0x1 00:05:04.658 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.658 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.658 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.658 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.658 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.658 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=fill 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=0x80 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 
00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=software 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@23 -- # accel_module=software 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=64 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=64 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val=1 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.917 18:03:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:04.917 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.917 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.918 18:03:23 -- accel/accel.sh@21 -- # val=Yes 00:05:04.918 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.918 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.918 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:04.918 18:03:23 -- accel/accel.sh@21 -- # val= 00:05:04.918 18:03:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # IFS=: 00:05:04.918 18:03:23 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 
00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@21 -- # val= 00:05:05.853 18:03:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # IFS=: 00:05:05.853 18:03:24 -- accel/accel.sh@20 -- # read -r var val 00:05:05.853 18:03:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:05.853 18:03:24 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:05.853 18:03:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.853 00:05:05.853 real 0m2.715s 00:05:05.853 user 0m2.384s 00:05:05.853 sys 0m0.132s 00:05:05.853 18:03:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.853 18:03:24 -- common/autotest_common.sh@10 -- # set +x 00:05:05.853 ************************************ 00:05:05.853 END TEST accel_fill 00:05:05.853 ************************************ 00:05:05.853 18:03:24 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:05.853 18:03:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:05.853 18:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.853 18:03:24 -- common/autotest_common.sh@10 -- # set +x 00:05:05.853 ************************************ 00:05:05.853 START TEST accel_copy_crc32c 00:05:05.853 ************************************ 00:05:05.853 18:03:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:05.853 18:03:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.853 18:03:24 -- accel/accel.sh@17 -- # local accel_module 00:05:05.853 18:03:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:05.853 18:03:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:05.853 18:03:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.853 18:03:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:05.853 18:03:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.853 18:03:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.853 18:03:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:05.853 18:03:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:05.853 18:03:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:05.853 18:03:24 -- accel/accel.sh@42 -- # jq -r . 00:05:06.112 [2024-11-18 18:03:24.458836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.112 [2024-11-18 18:03:24.459106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56367 ] 00:05:06.112 [2024-11-18 18:03:24.594780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.112 [2024-11-18 18:03:24.644235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.488 18:03:25 -- accel/accel.sh@18 -- # out=' 00:05:07.488 SPDK Configuration: 00:05:07.488 Core mask: 0x1 00:05:07.488 00:05:07.488 Accel Perf Configuration: 00:05:07.488 Workload Type: copy_crc32c 00:05:07.488 CRC-32C seed: 0 00:05:07.488 Vector size: 4096 bytes 00:05:07.488 Transfer size: 4096 bytes 00:05:07.488 Vector count 1 00:05:07.488 Module: software 00:05:07.488 Queue depth: 32 00:05:07.488 Allocate depth: 32 00:05:07.488 # threads/core: 1 00:05:07.488 Run time: 1 seconds 00:05:07.488 Verify: Yes 00:05:07.488 00:05:07.488 Running for 1 seconds... 
00:05:07.488 00:05:07.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:07.488 ------------------------------------------------------------------------------------ 00:05:07.488 0,0 280992/s 1097 MiB/s 0 0 00:05:07.488 ==================================================================================== 00:05:07.488 Total 280992/s 1097 MiB/s 0 0' 00:05:07.488 18:03:25 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:07.488 18:03:25 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:07.488 18:03:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.488 18:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.488 18:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.488 18:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.488 18:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.488 18:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.488 18:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.488 18:03:25 -- accel/accel.sh@42 -- # jq -r . 00:05:07.488 [2024-11-18 18:03:25.814665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.488 [2024-11-18 18:03:25.814909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56381 ] 00:05:07.488 [2024-11-18 18:03:25.950327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.488 [2024-11-18 18:03:25.997498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=0x1 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=0 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 
18:03:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=software 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@23 -- # accel_module=software 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=32 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=32 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=1 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val=Yes 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:07.488 18:03:26 -- accel/accel.sh@21 -- # val= 00:05:07.488 18:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # IFS=: 00:05:07.488 18:03:26 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 
00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@21 -- # val= 00:05:08.865 18:03:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # IFS=: 00:05:08.865 18:03:27 -- accel/accel.sh@20 -- # read -r var val 00:05:08.865 18:03:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:08.865 18:03:27 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:08.865 18:03:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.865 00:05:08.865 real 0m2.725s 00:05:08.865 user 0m2.394s 00:05:08.865 sys 0m0.130s 00:05:08.865 18:03:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.865 18:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.865 ************************************ 00:05:08.865 END TEST accel_copy_crc32c 00:05:08.865 ************************************ 00:05:08.865 18:03:27 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:08.865 18:03:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:08.865 18:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.865 18:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.865 ************************************ 00:05:08.865 START TEST accel_copy_crc32c_C2 00:05:08.865 ************************************ 00:05:08.865 18:03:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:08.865 18:03:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.865 18:03:27 -- accel/accel.sh@17 -- # local accel_module 00:05:08.865 18:03:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:08.865 18:03:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:08.865 18:03:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.865 18:03:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.865 18:03:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.865 18:03:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.865 18:03:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.865 18:03:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.865 18:03:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.865 18:03:27 -- accel/accel.sh@42 -- # jq -r . 00:05:08.865 [2024-11-18 18:03:27.242114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
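Each of these accel passes drives the same accel_perf example binary for one second against the software module, as the /home/vagrant/spdk_repo/spdk/build/examples/accel_perf invocations in the trace show. A minimal sketch of re-running one such workload by hand follows; the SPDK_DIR path is an assumption taken from the paths in this trace, and the JSON config that accel.sh feeds through -c /dev/fd/62 is omitted here.

    # Sketch only: re-running the copy_crc32c workload outside the test wrapper,
    # assuming a local SPDK build at the path used by this job.
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}
    # -t 1: run for 1 second, -w copy_crc32c: workload type, -y: verify results
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y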
00:05:08.865 [2024-11-18 18:03:27.242213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56416 ] 00:05:08.865 [2024-11-18 18:03:27.377241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.865 [2024-11-18 18:03:27.423874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.241 18:03:28 -- accel/accel.sh@18 -- # out=' 00:05:10.241 SPDK Configuration: 00:05:10.241 Core mask: 0x1 00:05:10.241 00:05:10.241 Accel Perf Configuration: 00:05:10.241 Workload Type: copy_crc32c 00:05:10.241 CRC-32C seed: 0 00:05:10.241 Vector size: 4096 bytes 00:05:10.241 Transfer size: 8192 bytes 00:05:10.241 Vector count 2 00:05:10.241 Module: software 00:05:10.241 Queue depth: 32 00:05:10.241 Allocate depth: 32 00:05:10.241 # threads/core: 1 00:05:10.241 Run time: 1 seconds 00:05:10.241 Verify: Yes 00:05:10.241 00:05:10.242 Running for 1 seconds... 00:05:10.242 00:05:10.242 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:10.242 ------------------------------------------------------------------------------------ 00:05:10.242 0,0 204640/s 1598 MiB/s 0 0 00:05:10.242 ==================================================================================== 00:05:10.242 Total 204640/s 799 MiB/s 0 0' 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:10.242 18:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:10.242 18:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.242 18:03:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.242 18:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.242 18:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.242 18:03:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.242 18:03:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.242 18:03:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.242 18:03:28 -- accel/accel.sh@42 -- # jq -r . 00:05:10.242 [2024-11-18 18:03:28.611023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
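The bandwidth column in these summary tables is simply the transfer rate multiplied by the transfer size; a quick shell check against the per-core copy_crc32c_C2 row above (204640 transfers/s at 8192 bytes) reproduces the reported ~1598 MiB/s.

    # Illustrative arithmetic only -- numbers taken from the table above.
    transfers_per_sec=204640
    transfer_size=8192          # bytes per transfer (vector count 2 x 4096)
    echo "$(( transfers_per_sec * transfer_size / 1024 / 1024 )) MiB/s"   # prints 1598 MiB/s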
00:05:10.242 [2024-11-18 18:03:28.611107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56435 ] 00:05:10.242 [2024-11-18 18:03:28.746369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.242 [2024-11-18 18:03:28.792829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=0x1 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=0 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=software 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=32 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=32 
00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=1 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val=Yes 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:10.242 18:03:28 -- accel/accel.sh@21 -- # val= 00:05:10.242 18:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # IFS=: 00:05:10.242 18:03:28 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@21 -- # val= 00:05:11.628 18:03:29 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # IFS=: 00:05:11.628 18:03:29 -- accel/accel.sh@20 -- # read -r var val 00:05:11.628 18:03:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:11.628 18:03:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:11.628 18:03:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.628 00:05:11.628 real 0m2.725s 00:05:11.628 user 0m2.396s 00:05:11.628 sys 0m0.128s 00:05:11.628 ************************************ 00:05:11.628 END TEST accel_copy_crc32c_C2 00:05:11.628 ************************************ 00:05:11.628 18:03:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.628 18:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:11.628 18:03:29 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:11.628 18:03:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:05:11.628 18:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.628 18:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:11.628 ************************************ 00:05:11.628 START TEST accel_dualcast 00:05:11.628 ************************************ 00:05:11.628 18:03:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:11.628 18:03:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.628 18:03:29 -- accel/accel.sh@17 -- # local accel_module 00:05:11.628 18:03:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:11.628 18:03:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:11.628 18:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.628 18:03:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.628 18:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.628 18:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.628 18:03:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.628 18:03:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.628 18:03:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.628 18:03:29 -- accel/accel.sh@42 -- # jq -r . 00:05:11.628 [2024-11-18 18:03:30.013886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.628 [2024-11-18 18:03:30.013970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56470 ] 00:05:11.628 [2024-11-18 18:03:30.148860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.628 [2024-11-18 18:03:30.200582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.004 18:03:31 -- accel/accel.sh@18 -- # out=' 00:05:13.004 SPDK Configuration: 00:05:13.004 Core mask: 0x1 00:05:13.004 00:05:13.004 Accel Perf Configuration: 00:05:13.004 Workload Type: dualcast 00:05:13.004 Transfer size: 4096 bytes 00:05:13.004 Vector count 1 00:05:13.004 Module: software 00:05:13.004 Queue depth: 32 00:05:13.004 Allocate depth: 32 00:05:13.004 # threads/core: 1 00:05:13.004 Run time: 1 seconds 00:05:13.004 Verify: Yes 00:05:13.004 00:05:13.004 Running for 1 seconds... 00:05:13.004 00:05:13.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:13.004 ------------------------------------------------------------------------------------ 00:05:13.004 0,0 388608/s 1518 MiB/s 0 0 00:05:13.004 ==================================================================================== 00:05:13.004 Total 388608/s 1518 MiB/s 0 0' 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:13.004 18:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:13.004 18:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.004 18:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.004 18:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.004 18:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.004 18:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.004 18:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.004 18:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.004 18:03:31 -- accel/accel.sh@42 -- # jq -r . 
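Every workload prints one of these summary tables per run, so a saved copy of a log this long is easiest to skim by pulling out just the Total rows; the snippet below is a hypothetical helper, and build.log is a placeholder name rather than a file produced by this job.

    # Hypothetical helper for skimming a saved copy of this log.
    grep -E 'Total +[0-9]+/s' build.log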
00:05:13.004 [2024-11-18 18:03:31.380357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:13.004 [2024-11-18 18:03:31.380450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56484 ] 00:05:13.004 [2024-11-18 18:03:31.518283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.004 [2024-11-18 18:03:31.570687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val=0x1 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.004 18:03:31 -- accel/accel.sh@21 -- # val=dualcast 00:05:13.004 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.004 18:03:31 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:13.004 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val=software 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val=32 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val=32 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val=1 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 
18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val=Yes 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:13.263 18:03:31 -- accel/accel.sh@21 -- # val= 00:05:13.263 18:03:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # IFS=: 00:05:13.263 18:03:31 -- accel/accel.sh@20 -- # read -r var val 00:05:14.198 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.198 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.198 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.198 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.198 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.198 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.199 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.199 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.199 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.199 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.199 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.199 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.199 18:03:32 -- accel/accel.sh@21 -- # val= 00:05:14.199 18:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # IFS=: 00:05:14.199 18:03:32 -- accel/accel.sh@20 -- # read -r var val 00:05:14.199 18:03:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:14.199 18:03:32 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:14.199 18:03:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.199 00:05:14.199 real 0m2.734s 00:05:14.199 user 0m2.387s 00:05:14.199 sys 0m0.146s 00:05:14.199 18:03:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.199 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:05:14.199 ************************************ 00:05:14.199 END TEST accel_dualcast 00:05:14.199 ************************************ 00:05:14.199 18:03:32 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:14.199 18:03:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:14.199 18:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.199 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:05:14.199 ************************************ 00:05:14.199 START TEST accel_compare 00:05:14.199 ************************************ 00:05:14.199 18:03:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:14.199 
18:03:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.199 18:03:32 -- accel/accel.sh@17 -- # local accel_module 00:05:14.199 18:03:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:14.199 18:03:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:14.199 18:03:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.199 18:03:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.199 18:03:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.199 18:03:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.199 18:03:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.199 18:03:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.199 18:03:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.199 18:03:32 -- accel/accel.sh@42 -- # jq -r . 00:05:14.457 [2024-11-18 18:03:32.805392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:14.457 [2024-11-18 18:03:32.805473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56518 ] 00:05:14.457 [2024-11-18 18:03:32.935013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.457 [2024-11-18 18:03:32.981948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.890 18:03:34 -- accel/accel.sh@18 -- # out=' 00:05:15.890 SPDK Configuration: 00:05:15.890 Core mask: 0x1 00:05:15.890 00:05:15.890 Accel Perf Configuration: 00:05:15.891 Workload Type: compare 00:05:15.891 Transfer size: 4096 bytes 00:05:15.891 Vector count 1 00:05:15.891 Module: software 00:05:15.891 Queue depth: 32 00:05:15.891 Allocate depth: 32 00:05:15.891 # threads/core: 1 00:05:15.891 Run time: 1 seconds 00:05:15.891 Verify: Yes 00:05:15.891 00:05:15.891 Running for 1 seconds... 00:05:15.891 00:05:15.891 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:15.891 ------------------------------------------------------------------------------------ 00:05:15.891 0,0 515136/s 2012 MiB/s 0 0 00:05:15.891 ==================================================================================== 00:05:15.891 Total 515136/s 2012 MiB/s 0 0' 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:15.891 18:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.891 18:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.891 18:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.891 18:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.891 18:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.891 18:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.891 18:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.891 18:03:34 -- accel/accel.sh@42 -- # jq -r . 00:05:15.891 [2024-11-18 18:03:34.163806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
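The long runs of val=/IFS=:/read traces between the summaries come from accel.sh stepping through its settings with a colon-separated read loop and a case dispatch. A stripped-down, stand-alone version of that pattern is sketched below; the setting names fed in are invented for the example and are not taken from accel.sh.

    # Stand-alone sketch of the "IFS=: read -r var val" + case pattern that
    # produces the repetitive xtrace above; the input settings are made up.
    printf 'opc:compare\nbytes:4096 bytes\nqueue:32\n' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)   echo "workload: $val" ;;
            bytes) echo "transfer size: $val" ;;
            *)     echo "other: $var=$val" ;;
        esac
    done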
00:05:15.891 [2024-11-18 18:03:34.164438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56538 ] 00:05:15.891 [2024-11-18 18:03:34.303592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.891 [2024-11-18 18:03:34.351619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=0x1 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=compare 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=software 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@23 -- # accel_module=software 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=32 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=32 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=1 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val=Yes 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:15.891 18:03:34 -- accel/accel.sh@21 -- # val= 00:05:15.891 18:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # IFS=: 00:05:15.891 18:03:34 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@21 -- # val= 00:05:17.268 18:03:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # IFS=: 00:05:17.268 ************************************ 00:05:17.268 END TEST accel_compare 00:05:17.268 ************************************ 00:05:17.268 18:03:35 -- accel/accel.sh@20 -- # read -r var val 00:05:17.268 18:03:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:17.268 18:03:35 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:17.268 18:03:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.268 00:05:17.268 real 0m2.727s 00:05:17.268 user 0m2.395s 00:05:17.268 sys 0m0.128s 00:05:17.268 18:03:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.268 18:03:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.268 18:03:35 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:17.268 18:03:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:17.268 18:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.268 18:03:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.268 ************************************ 00:05:17.268 START TEST accel_xor 00:05:17.268 ************************************ 00:05:17.268 18:03:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:17.268 18:03:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.268 18:03:35 -- accel/accel.sh@17 -- # local accel_module 00:05:17.268 
18:03:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:17.268 18:03:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:17.268 18:03:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.268 18:03:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.268 18:03:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.268 18:03:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.268 18:03:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.268 18:03:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.268 18:03:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.268 18:03:35 -- accel/accel.sh@42 -- # jq -r . 00:05:17.268 [2024-11-18 18:03:35.590826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.268 [2024-11-18 18:03:35.591108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56567 ] 00:05:17.268 [2024-11-18 18:03:35.727351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.268 [2024-11-18 18:03:35.777772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.644 18:03:36 -- accel/accel.sh@18 -- # out=' 00:05:18.644 SPDK Configuration: 00:05:18.644 Core mask: 0x1 00:05:18.644 00:05:18.644 Accel Perf Configuration: 00:05:18.644 Workload Type: xor 00:05:18.644 Source buffers: 2 00:05:18.644 Transfer size: 4096 bytes 00:05:18.644 Vector count 1 00:05:18.644 Module: software 00:05:18.644 Queue depth: 32 00:05:18.644 Allocate depth: 32 00:05:18.644 # threads/core: 1 00:05:18.644 Run time: 1 seconds 00:05:18.644 Verify: Yes 00:05:18.644 00:05:18.644 Running for 1 seconds... 00:05:18.644 00:05:18.644 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:18.644 ------------------------------------------------------------------------------------ 00:05:18.644 0,0 270688/s 1057 MiB/s 0 0 00:05:18.644 ==================================================================================== 00:05:18.644 Total 270688/s 1057 MiB/s 0 0' 00:05:18.644 18:03:36 -- accel/accel.sh@20 -- # IFS=: 00:05:18.644 18:03:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:18.644 18:03:36 -- accel/accel.sh@20 -- # read -r var val 00:05:18.644 18:03:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:18.644 18:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.644 18:03:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.644 18:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.645 18:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.645 18:03:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.645 18:03:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.645 18:03:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.645 18:03:36 -- accel/accel.sh@42 -- # jq -r . 00:05:18.645 [2024-11-18 18:03:36.958999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
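The two xor passes in this run differ only in how many source buffers accel_perf combines: the test above runs with two (no -x flag, "Source buffers: 2" in its summary), while the next one adds -x 3, as its accel_test invocation shows. A side-by-side sketch, with the same assumed SPDK_DIR as the earlier examples:

    # Sketch, assuming the same local build path as above.
    SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y        # 2 source buffers
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3   # 3 source buffers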
00:05:18.645 [2024-11-18 18:03:36.959087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56586 ] 00:05:18.645 [2024-11-18 18:03:37.095435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.645 [2024-11-18 18:03:37.146419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=0x1 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=xor 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=2 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=software 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=32 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=32 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=1 00:05:18.645 18:03:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val=Yes 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:18.645 18:03:37 -- accel/accel.sh@21 -- # val= 00:05:18.645 18:03:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # IFS=: 00:05:18.645 18:03:37 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@21 -- # val= 00:05:20.022 18:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # IFS=: 00:05:20.022 ************************************ 00:05:20.022 END TEST accel_xor 00:05:20.022 ************************************ 00:05:20.022 18:03:38 -- accel/accel.sh@20 -- # read -r var val 00:05:20.022 18:03:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:20.022 18:03:38 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:20.022 18:03:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.022 00:05:20.022 real 0m2.739s 00:05:20.022 user 0m2.397s 00:05:20.022 sys 0m0.138s 00:05:20.022 18:03:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.022 18:03:38 -- common/autotest_common.sh@10 -- # set +x 00:05:20.022 18:03:38 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:20.022 18:03:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:20.022 18:03:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.022 18:03:38 -- common/autotest_common.sh@10 -- # set +x 00:05:20.022 ************************************ 00:05:20.022 START TEST accel_xor 00:05:20.022 ************************************ 00:05:20.022 
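The START TEST / END TEST banners that bracket each workload come from the run_test helper in common/autotest_common.sh, which wraps a command such as "accel_test -t 1 -w xor -y -x 3". The sketch below is only a simplified stand-in for that shape, not SPDK's actual implementation.

    # Simplified, illustrative stand-in for the run_test wrapper.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        "$@"; local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # e.g. run_test_sketch accel_xor accel_test -t 1 -w xor -y -x 3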
18:03:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:05:20.022 18:03:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.022 18:03:38 -- accel/accel.sh@17 -- # local accel_module 00:05:20.022 18:03:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:20.022 18:03:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:20.022 18:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.022 18:03:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.022 18:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.022 18:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.022 18:03:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.022 18:03:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.022 18:03:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.022 18:03:38 -- accel/accel.sh@42 -- # jq -r . 00:05:20.022 [2024-11-18 18:03:38.382649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:20.022 [2024-11-18 18:03:38.382747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56621 ] 00:05:20.022 [2024-11-18 18:03:38.518546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.022 [2024-11-18 18:03:38.565877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.399 18:03:39 -- accel/accel.sh@18 -- # out=' 00:05:21.399 SPDK Configuration: 00:05:21.399 Core mask: 0x1 00:05:21.399 00:05:21.399 Accel Perf Configuration: 00:05:21.399 Workload Type: xor 00:05:21.399 Source buffers: 3 00:05:21.399 Transfer size: 4096 bytes 00:05:21.399 Vector count 1 00:05:21.399 Module: software 00:05:21.399 Queue depth: 32 00:05:21.399 Allocate depth: 32 00:05:21.399 # threads/core: 1 00:05:21.399 Run time: 1 seconds 00:05:21.399 Verify: Yes 00:05:21.399 00:05:21.399 Running for 1 seconds... 00:05:21.399 00:05:21.399 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.399 ------------------------------------------------------------------------------------ 00:05:21.399 0,0 270368/s 1056 MiB/s 0 0 00:05:21.399 ==================================================================================== 00:05:21.399 Total 270368/s 1056 MiB/s 0 0' 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:21.399 18:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.399 18:03:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.399 18:03:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.399 18:03:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.399 18:03:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.399 18:03:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.399 18:03:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.399 18:03:39 -- accel/accel.sh@42 -- # jq -r . 00:05:21.399 [2024-11-18 18:03:39.746248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:21.399 [2024-11-18 18:03:39.746520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56635 ] 00:05:21.399 [2024-11-18 18:03:39.883227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.399 [2024-11-18 18:03:39.932466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=0x1 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=xor 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=3 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=software 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=32 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=32 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=1 00:05:21.399 18:03:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val=Yes 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:21.399 18:03:39 -- accel/accel.sh@21 -- # val= 00:05:21.399 18:03:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # IFS=: 00:05:21.399 18:03:39 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 ************************************ 00:05:22.776 END TEST accel_xor 00:05:22.776 ************************************ 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@21 -- # val= 00:05:22.776 18:03:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # IFS=: 00:05:22.776 18:03:41 -- accel/accel.sh@20 -- # read -r var val 00:05:22.776 18:03:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.777 18:03:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:22.777 18:03:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.777 00:05:22.777 real 0m2.716s 00:05:22.777 user 0m2.384s 00:05:22.777 sys 0m0.131s 00:05:22.777 18:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.777 18:03:41 -- common/autotest_common.sh@10 -- # set +x 00:05:22.777 18:03:41 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:22.777 18:03:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:22.777 18:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.777 18:03:41 -- common/autotest_common.sh@10 -- # set +x 00:05:22.777 ************************************ 00:05:22.777 START TEST accel_dif_verify 00:05:22.777 ************************************ 
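The accel_dif_verify test that starts here checks T10 DIF-style protection information: its configuration dump below shows 512-byte blocks, 8 bytes of metadata per block and 4096-byte transfers. The log does not show which DIF fields accel_perf actually validates, so the C sketch below only illustrates the conventional 8-byte DIF trailer (16-bit CRC guard, 16-bit application tag, 32-bit reference tag) and the CRC16 T10-DIF guard computation, not SPDK's implementation:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Conventional 8-byte DIF trailer (the fields are big-endian on the wire). */
struct dif_tuple {
    uint16_t guard;    /* CRC16 of the 512-byte data block       */
    uint16_t app_tag;  /* opaque, application-defined            */
    uint32_t ref_tag;  /* typically derived from the block's LBA */
};

/* Bit-wise CRC16 with the T10-DIF polynomial 0x8BB7, initial value 0. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* dif_verify in spirit: recompute the guard and compare it to the stored one. */
static int dif_verify_block(const uint8_t block[512], const struct dif_tuple *dif)
{
    return crc16_t10dif(block, 512) == dif->guard ? 0 : -1;
}

int main(void)
{
    uint8_t block[512];
    struct dif_tuple dif = { 0 };

    memset(block, 0x5A, sizeof(block));
    dif.guard = crc16_t10dif(block, sizeof(block));  /* what dif_generate would fill in */
    assert(dif_verify_block(block, &dif) == 0);      /* what dif_verify then checks     */
    return 0;
}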
00:05:22.777 18:03:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:05:22.777 18:03:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.777 18:03:41 -- accel/accel.sh@17 -- # local accel_module 00:05:22.777 18:03:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:22.777 18:03:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:22.777 18:03:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.777 18:03:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.777 18:03:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.777 18:03:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.777 18:03:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.777 18:03:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.777 18:03:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.777 18:03:41 -- accel/accel.sh@42 -- # jq -r . 00:05:22.777 [2024-11-18 18:03:41.156811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.777 [2024-11-18 18:03:41.156897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56669 ] 00:05:22.777 [2024-11-18 18:03:41.293018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.777 [2024-11-18 18:03:41.341554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.178 18:03:42 -- accel/accel.sh@18 -- # out=' 00:05:24.178 SPDK Configuration: 00:05:24.178 Core mask: 0x1 00:05:24.178 00:05:24.178 Accel Perf Configuration: 00:05:24.178 Workload Type: dif_verify 00:05:24.178 Vector size: 4096 bytes 00:05:24.178 Transfer size: 4096 bytes 00:05:24.179 Block size: 512 bytes 00:05:24.179 Metadata size: 8 bytes 00:05:24.179 Vector count 1 00:05:24.179 Module: software 00:05:24.179 Queue depth: 32 00:05:24.179 Allocate depth: 32 00:05:24.179 # threads/core: 1 00:05:24.179 Run time: 1 seconds 00:05:24.179 Verify: No 00:05:24.179 00:05:24.179 Running for 1 seconds... 00:05:24.179 00:05:24.179 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:24.179 ------------------------------------------------------------------------------------ 00:05:24.179 0,0 116480/s 462 MiB/s 0 0 00:05:24.179 ==================================================================================== 00:05:24.179 Total 116480/s 455 MiB/s 0 0' 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:24.179 18:03:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.179 18:03:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.179 18:03:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.179 18:03:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.179 18:03:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.179 18:03:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.179 18:03:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.179 18:03:42 -- accel/accel.sh@42 -- # jq -r . 00:05:24.179 [2024-11-18 18:03:42.513426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:24.179 [2024-11-18 18:03:42.513939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56689 ] 00:05:24.179 [2024-11-18 18:03:42.650185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.179 [2024-11-18 18:03:42.696314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=0x1 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=dif_verify 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=software 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@23 -- # accel_module=software 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 
-- # val=32 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=32 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=1 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val=No 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:24.179 18:03:42 -- accel/accel.sh@21 -- # val= 00:05:24.179 18:03:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # IFS=: 00:05:24.179 18:03:42 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 18:03:43 -- accel/accel.sh@21 -- # val= 00:05:25.558 18:03:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # IFS=: 00:05:25.558 18:03:43 -- accel/accel.sh@20 -- # read -r var val 00:05:25.558 ************************************ 00:05:25.558 END TEST accel_dif_verify 00:05:25.558 ************************************ 00:05:25.558 18:03:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.558 18:03:43 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:25.558 18:03:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.558 00:05:25.558 real 0m2.714s 00:05:25.558 user 0m2.378s 00:05:25.558 sys 0m0.137s 00:05:25.558 18:03:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.558 
18:03:43 -- common/autotest_common.sh@10 -- # set +x 00:05:25.558 18:03:43 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:25.558 18:03:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:25.558 18:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.558 18:03:43 -- common/autotest_common.sh@10 -- # set +x 00:05:25.558 ************************************ 00:05:25.558 START TEST accel_dif_generate 00:05:25.558 ************************************ 00:05:25.558 18:03:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:05:25.558 18:03:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.558 18:03:43 -- accel/accel.sh@17 -- # local accel_module 00:05:25.558 18:03:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:25.558 18:03:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:25.558 18:03:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.558 18:03:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.558 18:03:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.558 18:03:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.558 18:03:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.558 18:03:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.558 18:03:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.558 18:03:43 -- accel/accel.sh@42 -- # jq -r . 00:05:25.558 [2024-11-18 18:03:43.922920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.558 [2024-11-18 18:03:43.923001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56718 ] 00:05:25.558 [2024-11-18 18:03:44.051260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.558 [2024-11-18 18:03:44.104981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.937 18:03:45 -- accel/accel.sh@18 -- # out=' 00:05:26.937 SPDK Configuration: 00:05:26.937 Core mask: 0x1 00:05:26.937 00:05:26.937 Accel Perf Configuration: 00:05:26.937 Workload Type: dif_generate 00:05:26.937 Vector size: 4096 bytes 00:05:26.937 Transfer size: 4096 bytes 00:05:26.937 Block size: 512 bytes 00:05:26.937 Metadata size: 8 bytes 00:05:26.937 Vector count 1 00:05:26.937 Module: software 00:05:26.937 Queue depth: 32 00:05:26.937 Allocate depth: 32 00:05:26.937 # threads/core: 1 00:05:26.937 Run time: 1 seconds 00:05:26.937 Verify: No 00:05:26.937 00:05:26.937 Running for 1 seconds... 
00:05:26.937 00:05:26.937 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:26.937 ------------------------------------------------------------------------------------ 00:05:26.938 0,0 142528/s 565 MiB/s 0 0 00:05:26.938 ==================================================================================== 00:05:26.938 Total 142528/s 556 MiB/s 0 0' 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:26.938 18:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.938 18:03:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:26.938 18:03:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.938 18:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.938 18:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.938 18:03:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.938 18:03:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.938 18:03:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.938 18:03:45 -- accel/accel.sh@42 -- # jq -r . 00:05:26.938 [2024-11-18 18:03:45.280019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.938 [2024-11-18 18:03:45.280109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56743 ] 00:05:26.938 [2024-11-18 18:03:45.416140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.938 [2024-11-18 18:03:45.468397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=0x1 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=dif_generate 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 
00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=software 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@23 -- # accel_module=software 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=32 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=32 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=1 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val=No 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:26.938 18:03:45 -- accel/accel.sh@21 -- # val= 00:05:26.938 18:03:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # IFS=: 00:05:26.938 18:03:45 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- 
accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@21 -- # val= 00:05:28.317 18:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # IFS=: 00:05:28.317 18:03:46 -- accel/accel.sh@20 -- # read -r var val 00:05:28.317 18:03:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.317 18:03:46 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:28.317 18:03:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.317 00:05:28.317 real 0m2.716s 00:05:28.317 user 0m2.380s 00:05:28.317 sys 0m0.139s 00:05:28.317 18:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.317 18:03:46 -- common/autotest_common.sh@10 -- # set +x 00:05:28.317 ************************************ 00:05:28.317 END TEST accel_dif_generate 00:05:28.317 ************************************ 00:05:28.317 18:03:46 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:28.317 18:03:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:28.317 18:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.317 18:03:46 -- common/autotest_common.sh@10 -- # set +x 00:05:28.317 ************************************ 00:05:28.317 START TEST accel_dif_generate_copy 00:05:28.317 ************************************ 00:05:28.317 18:03:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:05:28.317 18:03:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.317 18:03:46 -- accel/accel.sh@17 -- # local accel_module 00:05:28.317 18:03:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:28.317 18:03:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:28.317 18:03:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.317 18:03:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.317 18:03:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.317 18:03:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.317 18:03:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.317 18:03:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.317 18:03:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.317 18:03:46 -- accel/accel.sh@42 -- # jq -r . 00:05:28.317 [2024-11-18 18:03:46.697881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
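The accel_dif_generate test that just finished above and the accel_dif_generate_copy test starting here are close relatives: as the names suggest, dif_generate computes protection information for data already in place, while dif_generate_copy produces the protection information while copying the data to a destination buffer. The exact buffer layout accel_perf uses is not visible in this log; the sketch below assumes one common "extended LBA" layout in which every 512-byte block is immediately followed by its 8-byte DIF trailer, and it reuses the hypothetical crc16_t10dif() helper from the earlier sketch (drop its static qualifier if you compile the two files together):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLK   512
#define MD      8
#define NBLKS (4096 / BLK)   /* 4096-byte transfer = 8 blocks, per the config above */

uint16_t crc16_t10dif(const uint8_t *buf, size_t len);   /* from the earlier sketch */

/*
 * dif_generate_copy in spirit: read plain 512-byte blocks from src and emit
 * "block + 8-byte trailer" records into dst, so dst must hold NBLKS * (BLK + MD)
 * bytes.  Only the 2-byte guard is filled in; app/ref tags are left zeroed.
 */
void dif_generate_copy_sketch(uint8_t *dst, const uint8_t *src)
{
    for (int i = 0; i < NBLKS; i++) {
        const uint8_t *in = src + (size_t)i * BLK;
        uint8_t *out = dst + (size_t)i * (BLK + MD);
        uint16_t guard = crc16_t10dif(in, BLK);

        memcpy(out, in, BLK);                   /* the "copy" half                */
        memset(out + BLK, 0, MD);               /* zero the app/ref tag bytes     */
        out[BLK]     = (uint8_t)(guard >> 8);   /* the "generate" half: store the */
        out[BLK + 1] = (uint8_t)guard;          /* guard big-endian               */
    }
}

Plain dif_generate would look the same minus the memcpy(), writing the trailers into metadata space that already sits next to the data.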
00:05:28.317 [2024-11-18 18:03:46.698159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56772 ] 00:05:28.317 [2024-11-18 18:03:46.834877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.317 [2024-11-18 18:03:46.881913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.697 18:03:48 -- accel/accel.sh@18 -- # out=' 00:05:29.697 SPDK Configuration: 00:05:29.697 Core mask: 0x1 00:05:29.697 00:05:29.697 Accel Perf Configuration: 00:05:29.697 Workload Type: dif_generate_copy 00:05:29.697 Vector size: 4096 bytes 00:05:29.697 Transfer size: 4096 bytes 00:05:29.697 Vector count 1 00:05:29.697 Module: software 00:05:29.697 Queue depth: 32 00:05:29.697 Allocate depth: 32 00:05:29.697 # threads/core: 1 00:05:29.697 Run time: 1 seconds 00:05:29.697 Verify: No 00:05:29.697 00:05:29.697 Running for 1 seconds... 00:05:29.697 00:05:29.697 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.697 ------------------------------------------------------------------------------------ 00:05:29.697 0,0 109600/s 434 MiB/s 0 0 00:05:29.697 ==================================================================================== 00:05:29.697 Total 109600/s 428 MiB/s 0 0' 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:29.697 18:03:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.697 18:03:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.697 18:03:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.697 18:03:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.697 18:03:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.697 18:03:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.697 18:03:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.697 18:03:48 -- accel/accel.sh@42 -- # jq -r . 00:05:29.697 [2024-11-18 18:03:48.049870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
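As a quick sanity check on these result tables, the Total bandwidth is simply transfers per second times the transfer size (4,096 bytes here): for the dif_generate_copy run above, 109,600 transfers/s x 4,096 B is about 448.9 MB/s, or 428 MiB/s, which matches the reported Total of 428 MiB/s (the per-core column prints 434 MiB/s for the same run, so that figure is evidently rounded or measured slightly differently). The same arithmetic reproduces the Total row of the other accel_perf tables in this log.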
00:05:29.697 [2024-11-18 18:03:48.049969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56786 ] 00:05:29.697 [2024-11-18 18:03:48.182062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.697 [2024-11-18 18:03:48.228704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=0x1 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=software 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=32 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=32 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 
-- # val=1 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val=No 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:29.697 18:03:48 -- accel/accel.sh@21 -- # val= 00:05:29.697 18:03:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # IFS=: 00:05:29.697 18:03:48 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 ************************************ 00:05:31.075 END TEST accel_dif_generate_copy 00:05:31.075 ************************************ 00:05:31.075 18:03:49 -- accel/accel.sh@21 -- # val= 00:05:31.075 18:03:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # IFS=: 00:05:31.075 18:03:49 -- accel/accel.sh@20 -- # read -r var val 00:05:31.075 18:03:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.075 18:03:49 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:31.075 18:03:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.075 00:05:31.075 real 0m2.716s 00:05:31.075 user 0m2.376s 00:05:31.075 sys 0m0.138s 00:05:31.075 18:03:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.075 18:03:49 -- common/autotest_common.sh@10 -- # set +x 00:05:31.075 18:03:49 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:31.075 18:03:49 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.075 18:03:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:31.075 18:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.075 18:03:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.075 ************************************ 00:05:31.075 START TEST accel_comp 00:05:31.075 ************************************ 00:05:31.075 18:03:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.075 18:03:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.075 18:03:49 -- accel/accel.sh@17 -- # local accel_module 00:05:31.075 18:03:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.075 18:03:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.075 18:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.075 18:03:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.075 18:03:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.075 18:03:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.075 18:03:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.075 18:03:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.075 18:03:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.075 18:03:49 -- accel/accel.sh@42 -- # jq -r . 00:05:31.075 [2024-11-18 18:03:49.460811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.075 [2024-11-18 18:03:49.460907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56826 ] 00:05:31.075 [2024-11-18 18:03:49.597832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.075 [2024-11-18 18:03:49.645580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.452 18:03:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:32.452 00:05:32.452 SPDK Configuration: 00:05:32.452 Core mask: 0x1 00:05:32.452 00:05:32.452 Accel Perf Configuration: 00:05:32.452 Workload Type: compress 00:05:32.452 Transfer size: 4096 bytes 00:05:32.452 Vector count 1 00:05:32.452 Module: software 00:05:32.452 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.452 Queue depth: 32 00:05:32.452 Allocate depth: 32 00:05:32.452 # threads/core: 1 00:05:32.452 Run time: 1 seconds 00:05:32.452 Verify: No 00:05:32.452 00:05:32.452 Running for 1 seconds... 
00:05:32.452 00:05:32.452 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:32.452 ------------------------------------------------------------------------------------ 00:05:32.452 0,0 55424/s 231 MiB/s 0 0 00:05:32.452 ==================================================================================== 00:05:32.452 Total 55424/s 216 MiB/s 0 0' 00:05:32.452 18:03:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.452 18:03:50 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:50 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.452 18:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.452 18:03:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.452 18:03:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.452 18:03:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.452 18:03:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.452 18:03:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.452 18:03:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.452 18:03:50 -- accel/accel.sh@42 -- # jq -r . 00:05:32.452 [2024-11-18 18:03:50.808958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.452 [2024-11-18 18:03:50.809181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56840 ] 00:05:32.452 [2024-11-18 18:03:50.937257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.452 [2024-11-18 18:03:50.984346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val=0x1 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val=compress 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 
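The accel_comp test running here compresses the checked-in /home/vagrant/spdk_repo/spdk/test/accel/bib corpus (the -l argument) in 4096-byte transfers through the software module. The log does not say which compression library backs that module, so the snippet below uses plain zlib purely as an illustration of the same shape of operation, compressing one 4 KiB chunk and reporting the compressed size:

#include <stdio.h>
#include <zlib.h>

#define CHUNK 4096   /* "Transfer size: 4096 bytes" in the run above */

int main(void)
{
    unsigned char in[CHUNK], out[CHUNK * 2];
    uLongf out_len = sizeof(out);

    /* Stand-in for one 4 KiB slice of the bib input file. */
    for (int i = 0; i < CHUNK; i++)
        in[i] = (unsigned char)(' ' + i % 64);

    if (compress2(out, &out_len, in, CHUNK, Z_DEFAULT_COMPRESSION) != Z_OK) {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    printf("%d bytes in -> %lu bytes out\n", CHUNK, (unsigned long)out_len);
    return 0;
}

Build with something like cc comp_sketch.c -lz; the source file name is made up here, only the -lz link flag matters.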
00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.452 18:03:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:32.452 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.452 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=software 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@23 -- # accel_module=software 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=32 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=32 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=1 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val=No 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:32.453 18:03:51 -- accel/accel.sh@21 -- # val= 00:05:32.453 18:03:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # IFS=: 00:05:32.453 18:03:51 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 
00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@21 -- # val= 00:05:33.829 18:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # IFS=: 00:05:33.829 18:03:52 -- accel/accel.sh@20 -- # read -r var val 00:05:33.829 18:03:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:33.829 18:03:52 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:33.829 18:03:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.829 00:05:33.829 real 0m2.701s 00:05:33.829 user 0m2.372s 00:05:33.829 sys 0m0.129s 00:05:33.829 18:03:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.829 18:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.829 ************************************ 00:05:33.829 END TEST accel_comp 00:05:33.829 ************************************ 00:05:33.829 18:03:52 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.830 18:03:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:33.830 18:03:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.830 18:03:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.830 ************************************ 00:05:33.830 START TEST accel_decomp 00:05:33.830 ************************************ 00:05:33.830 18:03:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.830 18:03:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.830 18:03:52 -- accel/accel.sh@17 -- # local accel_module 00:05:33.830 18:03:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.830 18:03:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.830 18:03:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.830 18:03:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.830 18:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.830 18:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.830 18:03:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.830 18:03:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.830 18:03:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.830 18:03:52 -- accel/accel.sh@42 -- # jq -r . 00:05:33.830 [2024-11-18 18:03:52.212581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.830 [2024-11-18 18:03:52.212667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56869 ] 00:05:33.830 [2024-11-18 18:03:52.349592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.830 [2024-11-18 18:03:52.401654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.208 18:03:53 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:35.208 00:05:35.208 SPDK Configuration: 00:05:35.208 Core mask: 0x1 00:05:35.208 00:05:35.208 Accel Perf Configuration: 00:05:35.208 Workload Type: decompress 00:05:35.208 Transfer size: 4096 bytes 00:05:35.208 Vector count 1 00:05:35.208 Module: software 00:05:35.208 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:35.208 Queue depth: 32 00:05:35.208 Allocate depth: 32 00:05:35.208 # threads/core: 1 00:05:35.208 Run time: 1 seconds 00:05:35.208 Verify: Yes 00:05:35.208 00:05:35.208 Running for 1 seconds... 00:05:35.208 00:05:35.208 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:35.208 ------------------------------------------------------------------------------------ 00:05:35.208 0,0 79808/s 147 MiB/s 0 0 00:05:35.208 ==================================================================================== 00:05:35.208 Total 79808/s 311 MiB/s 0 0' 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.208 18:03:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.208 18:03:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.208 18:03:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.208 18:03:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.208 18:03:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.208 18:03:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.208 18:03:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.208 18:03:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.208 18:03:53 -- accel/accel.sh@42 -- # jq -r . 00:05:35.208 [2024-11-18 18:03:53.585499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
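The accel_decomp run whose first pass is summarised just above works in the reverse direction on data derived from the same bib file, and "Verify: Yes" in its configuration means each decompressed transfer is checked against the expected output. Still hedging on which library SPDK's software path really uses, a zlib-based round trip that mirrors that verify step could look like:

#include <assert.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char plain[4096], packed[8192], restored[4096];
    uLongf packed_len = sizeof(packed), restored_len = sizeof(restored);
    int rc;

    memset(plain, 'b', sizeof(plain));   /* stand-in for a slice of bib */

    /* Compress once so there is something to decompress... */
    rc = compress2(packed, &packed_len, plain, sizeof(plain), Z_DEFAULT_COMPRESSION);
    assert(rc == Z_OK);

    /* ...then decompress and verify, as the -y / "Verify: Yes" run does. */
    rc = uncompress(restored, &restored_len, packed, packed_len);
    assert(rc == Z_OK);
    assert(restored_len == sizeof(plain));
    assert(memcmp(restored, plain, sizeof(plain)) == 0);
    return 0;
}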
00:05:35.208 [2024-11-18 18:03:53.585818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56893 ] 00:05:35.208 [2024-11-18 18:03:53.719589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.208 [2024-11-18 18:03:53.767039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val=0x1 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val=decompress 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.208 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.208 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.208 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val=software 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val=32 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- 
accel/accel.sh@21 -- # val=32 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val=1 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val=Yes 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:35.467 18:03:53 -- accel/accel.sh@21 -- # val= 00:05:35.467 18:03:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # IFS=: 00:05:35.467 18:03:53 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@21 -- # val= 00:05:36.405 18:03:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # IFS=: 00:05:36.405 18:03:54 -- accel/accel.sh@20 -- # read -r var val 00:05:36.405 18:03:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.405 18:03:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:36.405 18:03:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.405 00:05:36.405 real 0m2.732s 00:05:36.405 user 0m2.389s 00:05:36.405 sys 0m0.143s 00:05:36.405 18:03:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.405 18:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:36.405 ************************************ 00:05:36.405 END TEST accel_decomp 00:05:36.405 ************************************ 00:05:36.405 18:03:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:05:36.405 18:03:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:36.405 18:03:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.405 18:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:36.405 ************************************ 00:05:36.405 START TEST accel_decmop_full 00:05:36.405 ************************************ 00:05:36.405 18:03:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:36.405 18:03:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.405 18:03:54 -- accel/accel.sh@17 -- # local accel_module 00:05:36.405 18:03:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:36.405 18:03:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.405 18:03:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:36.405 18:03:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.405 18:03:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.405 18:03:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.405 18:03:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.405 18:03:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.405 18:03:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.405 18:03:54 -- accel/accel.sh@42 -- # jq -r . 00:05:36.405 [2024-11-18 18:03:54.997775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.405 [2024-11-18 18:03:54.998037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56923 ] 00:05:36.665 [2024-11-18 18:03:55.133621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.665 [2024-11-18 18:03:55.180490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.044 18:03:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:38.044 00:05:38.044 SPDK Configuration: 00:05:38.044 Core mask: 0x1 00:05:38.044 00:05:38.044 Accel Perf Configuration: 00:05:38.044 Workload Type: decompress 00:05:38.044 Transfer size: 111250 bytes 00:05:38.044 Vector count 1 00:05:38.044 Module: software 00:05:38.044 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.044 Queue depth: 32 00:05:38.044 Allocate depth: 32 00:05:38.044 # threads/core: 1 00:05:38.044 Run time: 1 seconds 00:05:38.044 Verify: Yes 00:05:38.044 00:05:38.044 Running for 1 seconds... 
00:05:38.044 00:05:38.044 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:38.044 ------------------------------------------------------------------------------------ 00:05:38.044 0,0 5376/s 222 MiB/s 0 0 00:05:38.044 ==================================================================================== 00:05:38.044 Total 5376/s 570 MiB/s 0 0' 00:05:38.044 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.044 18:03:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:38.044 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.044 18:03:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:38.044 18:03:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.044 18:03:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.044 18:03:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.044 18:03:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.044 18:03:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.044 18:03:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.044 18:03:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.044 18:03:56 -- accel/accel.sh@42 -- # jq -r . 00:05:38.044 [2024-11-18 18:03:56.363807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.044 [2024-11-18 18:03:56.364072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:05:38.044 [2024-11-18 18:03:56.500644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.044 [2024-11-18 18:03:56.549601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.044 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=0x1 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=decompress 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:38.045 18:03:56 -- accel/accel.sh@20 
-- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=software 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=32 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=32 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=1 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val=Yes 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:38.045 18:03:56 -- accel/accel.sh@21 -- # val= 00:05:38.045 18:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # IFS=: 00:05:38.045 18:03:56 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # 
val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@21 -- # val= 00:05:39.424 18:03:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # IFS=: 00:05:39.424 18:03:57 -- accel/accel.sh@20 -- # read -r var val 00:05:39.424 18:03:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:39.424 18:03:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:39.424 18:03:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.424 00:05:39.424 real 0m2.743s 00:05:39.424 user 0m2.397s 00:05:39.424 sys 0m0.142s 00:05:39.424 18:03:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.424 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:05:39.424 ************************************ 00:05:39.424 END TEST accel_decmop_full 00:05:39.424 ************************************ 00:05:39.424 18:03:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:39.424 18:03:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:39.424 18:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.424 18:03:57 -- common/autotest_common.sh@10 -- # set +x 00:05:39.424 ************************************ 00:05:39.424 START TEST accel_decomp_mcore 00:05:39.424 ************************************ 00:05:39.424 18:03:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:39.424 18:03:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.424 18:03:57 -- accel/accel.sh@17 -- # local accel_module 00:05:39.424 18:03:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:39.424 18:03:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:39.424 18:03:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.424 18:03:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.424 18:03:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.424 18:03:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.424 18:03:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.424 18:03:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.424 18:03:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.424 18:03:57 -- accel/accel.sh@42 -- # jq -r . 00:05:39.424 [2024-11-18 18:03:57.786573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
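A quick consistency check on the accel_decmop_full results above: the Total line matches transfers per second multiplied by the 111250-byte transfer size, with MiB taken as 2^20 bytes (the per-core row shows a different bandwidth figure for the same transfer rate, so only the Total row is checked here). Values are copied straight from the table:

echo $(( 5376 * 111250 / 1048576 ))    # 570, matching "Total 5376/s 570 MiB/s"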
00:05:39.424 [2024-11-18 18:03:57.786659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56976 ] 00:05:39.424 [2024-11-18 18:03:57.922910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.424 [2024-11-18 18:03:57.971242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.424 [2024-11-18 18:03:57.971381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.424 [2024-11-18 18:03:57.971657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.424 [2024-11-18 18:03:57.971495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.806 18:03:59 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:40.806 00:05:40.806 SPDK Configuration: 00:05:40.806 Core mask: 0xf 00:05:40.806 00:05:40.806 Accel Perf Configuration: 00:05:40.806 Workload Type: decompress 00:05:40.806 Transfer size: 4096 bytes 00:05:40.806 Vector count 1 00:05:40.806 Module: software 00:05:40.806 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:40.806 Queue depth: 32 00:05:40.806 Allocate depth: 32 00:05:40.806 # threads/core: 1 00:05:40.806 Run time: 1 seconds 00:05:40.806 Verify: Yes 00:05:40.806 00:05:40.806 Running for 1 seconds... 00:05:40.806 00:05:40.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.806 ------------------------------------------------------------------------------------ 00:05:40.806 0,0 65760/s 121 MiB/s 0 0 00:05:40.806 3,0 61216/s 112 MiB/s 0 0 00:05:40.806 2,0 63360/s 116 MiB/s 0 0 00:05:40.806 1,0 61120/s 112 MiB/s 0 0 00:05:40.806 ==================================================================================== 00:05:40.806 Total 251456/s 982 MiB/s 0 0' 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:40.806 18:03:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.806 18:03:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.806 18:03:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.806 18:03:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.806 18:03:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.806 18:03:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.806 18:03:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.806 18:03:59 -- accel/accel.sh@42 -- # jq -r . 00:05:40.806 [2024-11-18 18:03:59.146395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
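In the accel_decomp_mcore table above, core mask 0xf gives four reactors running the same decompress workload; the Total row is the sum of the four per-core transfer rates, and at 4096 bytes per transfer that sum reproduces the 982 MiB/s figure. Arithmetic only, using numbers copied from the table:

echo $(( 65760 + 61216 + 63360 + 61120 ))   # 251456 transfers/s, matching the Total row
echo $(( 251456 * 4096 / 1048576 ))         # 982 MiB/s, matching "Total 251456/s 982 MiB/s"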
00:05:40.806 [2024-11-18 18:03:59.146716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56994 ] 00:05:40.806 [2024-11-18 18:03:59.277217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.806 [2024-11-18 18:03:59.326663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.806 [2024-11-18 18:03:59.326797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.806 [2024-11-18 18:03:59.326895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.806 [2024-11-18 18:03:59.327098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=0xf 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=decompress 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=software 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 
00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=32 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=32 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=1 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val=Yes 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.806 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:40.806 18:03:59 -- accel/accel.sh@21 -- # val= 00:05:40.806 18:03:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.807 18:03:59 -- accel/accel.sh@20 -- # IFS=: 00:05:40.807 18:03:59 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- 
accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@21 -- # val= 00:05:42.186 18:04:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # IFS=: 00:05:42.186 18:04:00 -- accel/accel.sh@20 -- # read -r var val 00:05:42.186 18:04:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.186 18:04:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:42.186 18:04:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.186 00:05:42.186 real 0m2.726s 00:05:42.186 user 0m8.771s 00:05:42.186 sys 0m0.149s 00:05:42.186 18:04:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.186 18:04:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.186 ************************************ 00:05:42.186 END TEST accel_decomp_mcore 00:05:42.186 ************************************ 00:05:42.186 18:04:00 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:42.186 18:04:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:42.186 18:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.186 18:04:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.186 ************************************ 00:05:42.186 START TEST accel_decomp_full_mcore 00:05:42.186 ************************************ 00:05:42.186 18:04:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:42.186 18:04:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.186 18:04:00 -- accel/accel.sh@17 -- # local accel_module 00:05:42.186 18:04:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:42.186 18:04:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:42.186 18:04:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.186 18:04:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.186 18:04:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.186 18:04:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.186 18:04:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.186 18:04:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.186 18:04:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.186 18:04:00 -- accel/accel.sh@42 -- # jq -r . 00:05:42.186 [2024-11-18 18:04:00.566729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.186 [2024-11-18 18:04:00.566815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57032 ] 00:05:42.186 [2024-11-18 18:04:00.700451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.186 [2024-11-18 18:04:00.755853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.186 [2024-11-18 18:04:00.755995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.186 [2024-11-18 18:04:00.756121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.186 [2024-11-18 18:04:00.756360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.563 18:04:01 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:43.563 00:05:43.563 SPDK Configuration: 00:05:43.563 Core mask: 0xf 00:05:43.563 00:05:43.563 Accel Perf Configuration: 00:05:43.563 Workload Type: decompress 00:05:43.563 Transfer size: 111250 bytes 00:05:43.563 Vector count 1 00:05:43.563 Module: software 00:05:43.563 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.563 Queue depth: 32 00:05:43.563 Allocate depth: 32 00:05:43.563 # threads/core: 1 00:05:43.563 Run time: 1 seconds 00:05:43.563 Verify: Yes 00:05:43.563 00:05:43.563 Running for 1 seconds... 00:05:43.563 00:05:43.563 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.563 ------------------------------------------------------------------------------------ 00:05:43.563 0,0 4928/s 203 MiB/s 0 0 00:05:43.563 3,0 4896/s 202 MiB/s 0 0 00:05:43.563 2,0 4928/s 203 MiB/s 0 0 00:05:43.563 1,0 4928/s 203 MiB/s 0 0 00:05:43.563 ==================================================================================== 00:05:43.563 Total 19680/s 2087 MiB/s 0 0' 00:05:43.563 18:04:01 -- accel/accel.sh@20 -- # IFS=: 00:05:43.563 18:04:01 -- accel/accel.sh@20 -- # read -r var val 00:05:43.563 18:04:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.563 18:04:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:43.563 18:04:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.563 18:04:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.563 18:04:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.563 18:04:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.563 18:04:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.563 18:04:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.563 18:04:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.563 18:04:01 -- accel/accel.sh@42 -- # jq -r . 00:05:43.563 [2024-11-18 18:04:01.954643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
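Putting the two 111250-byte runs side by side gives a rough multicore scaling figure for this software decompress path: the single-core accel_decmop_full run earlier sustained 5376 transfers/s, while the four-core accel_decomp_full_mcore total just above is 19680 transfers/s, about 3.7x on four cores. A one-line check with the two Total-row values (scaled by 100 to stay in integer arithmetic):

echo $(( 19680 * 100 / 5376 ))   # 366, i.e. roughly 3.66x the single-core transfer rate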
00:05:43.563 [2024-11-18 18:04:01.954738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57049 ] 00:05:43.563 [2024-11-18 18:04:02.090465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.563 [2024-11-18 18:04:02.138758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.563 [2024-11-18 18:04:02.138922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.563 [2024-11-18 18:04:02.139052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.563 [2024-11-18 18:04:02.139312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val=0xf 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.823 18:04:02 -- accel/accel.sh@21 -- # val=decompress 00:05:43.823 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.823 18:04:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.823 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=software 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 
00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=32 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=32 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=1 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val=Yes 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:43.824 18:04:02 -- accel/accel.sh@21 -- # val= 00:05:43.824 18:04:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # IFS=: 00:05:43.824 18:04:02 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.762 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.762 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.762 18:04:03 -- 
accel/accel.sh@20 -- # read -r var val 00:05:44.762 18:04:03 -- accel/accel.sh@21 -- # val= 00:05:44.763 18:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.763 18:04:03 -- accel/accel.sh@20 -- # IFS=: 00:05:44.763 18:04:03 -- accel/accel.sh@20 -- # read -r var val 00:05:44.763 18:04:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.763 ************************************ 00:05:44.763 END TEST accel_decomp_full_mcore 00:05:44.763 ************************************ 00:05:44.763 18:04:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:44.763 18:04:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.763 00:05:44.763 real 0m2.779s 00:05:44.763 user 0m8.891s 00:05:44.763 sys 0m0.156s 00:05:44.763 18:04:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.763 18:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.763 18:04:03 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:44.763 18:04:03 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:44.763 18:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.763 18:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:45.022 ************************************ 00:05:45.022 START TEST accel_decomp_mthread 00:05:45.022 ************************************ 00:05:45.022 18:04:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.022 18:04:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.022 18:04:03 -- accel/accel.sh@17 -- # local accel_module 00:05:45.022 18:04:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.022 18:04:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:45.022 18:04:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.022 18:04:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.022 18:04:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.022 18:04:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.022 18:04:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.022 18:04:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.022 18:04:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.022 18:04:03 -- accel/accel.sh@42 -- # jq -r . 00:05:45.022 [2024-11-18 18:04:03.398637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.022 [2024-11-18 18:04:03.398723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57086 ] 00:05:45.022 [2024-11-18 18:04:03.533273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.022 [2024-11-18 18:04:03.579913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.463 18:04:04 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:46.463 00:05:46.463 SPDK Configuration: 00:05:46.463 Core mask: 0x1 00:05:46.463 00:05:46.463 Accel Perf Configuration: 00:05:46.463 Workload Type: decompress 00:05:46.463 Transfer size: 4096 bytes 00:05:46.463 Vector count 1 00:05:46.463 Module: software 00:05:46.463 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.463 Queue depth: 32 00:05:46.463 Allocate depth: 32 00:05:46.463 # threads/core: 2 00:05:46.463 Run time: 1 seconds 00:05:46.463 Verify: Yes 00:05:46.463 00:05:46.463 Running for 1 seconds... 00:05:46.463 00:05:46.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.463 ------------------------------------------------------------------------------------ 00:05:46.463 0,1 40704/s 75 MiB/s 0 0 00:05:46.463 0,0 40608/s 74 MiB/s 0 0 00:05:46.463 ==================================================================================== 00:05:46.463 Total 81312/s 317 MiB/s 0 0' 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:46.463 18:04:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.463 18:04:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.463 18:04:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.463 18:04:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.463 18:04:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.463 18:04:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.463 18:04:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.463 18:04:04 -- accel/accel.sh@42 -- # jq -r . 00:05:46.463 [2024-11-18 18:04:04.759712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
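The accel_decomp_mthread run above passes -T 2 with a 0x1 core mask, so accel_perf drives the workload with two threads on core 0 ("# threads/core: 2" in the configuration block); they appear as the 0,0 and 0,1 rows of the Core,Thread column, and the Total row is their sum:

echo $(( 40704 + 40608 ))            # 81312 transfers/s, matching the Total row
echo $(( 81312 * 4096 / 1048576 ))   # 317 MiB/s, matching "Total 81312/s 317 MiB/s"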
00:05:46.463 [2024-11-18 18:04:04.759999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57108 ] 00:05:46.463 [2024-11-18 18:04:04.894618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.463 [2024-11-18 18:04:04.944008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=0x1 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=decompress 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=software 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=32 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- 
accel/accel.sh@21 -- # val=32 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=2 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val=Yes 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:46.463 18:04:04 -- accel/accel.sh@21 -- # val= 00:05:46.463 18:04:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # IFS=: 00:05:46.463 18:04:04 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@21 -- # val= 00:05:47.843 18:04:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # IFS=: 00:05:47.843 18:04:06 -- accel/accel.sh@20 -- # read -r var val 00:05:47.843 18:04:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.843 18:04:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:47.843 18:04:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.843 00:05:47.843 real 0m2.730s 00:05:47.843 user 0m2.378s 00:05:47.843 sys 0m0.147s 00:05:47.843 18:04:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.843 ************************************ 00:05:47.843 END TEST accel_decomp_mthread 00:05:47.843 
************************************ 00:05:47.843 18:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.843 18:04:06 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.843 18:04:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:47.843 18:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.843 18:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.843 ************************************ 00:05:47.843 START TEST accel_deomp_full_mthread 00:05:47.844 ************************************ 00:05:47.844 18:04:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.844 18:04:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.844 18:04:06 -- accel/accel.sh@17 -- # local accel_module 00:05:47.844 18:04:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.844 18:04:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.844 18:04:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.844 18:04:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.844 18:04:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.844 18:04:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.844 18:04:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.844 18:04:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.844 18:04:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.844 18:04:06 -- accel/accel.sh@42 -- # jq -r . 00:05:47.844 [2024-11-18 18:04:06.177037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.844 [2024-11-18 18:04:06.177163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:05:47.844 [2024-11-18 18:04:06.313754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.844 [2024-11-18 18:04:06.359986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.222 18:04:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:49.222 00:05:49.222 SPDK Configuration: 00:05:49.222 Core mask: 0x1 00:05:49.222 00:05:49.222 Accel Perf Configuration: 00:05:49.222 Workload Type: decompress 00:05:49.222 Transfer size: 111250 bytes 00:05:49.222 Vector count 1 00:05:49.222 Module: software 00:05:49.222 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.222 Queue depth: 32 00:05:49.222 Allocate depth: 32 00:05:49.222 # threads/core: 2 00:05:49.222 Run time: 1 seconds 00:05:49.222 Verify: Yes 00:05:49.222 00:05:49.222 Running for 1 seconds... 
00:05:49.222 00:05:49.222 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.222 ------------------------------------------------------------------------------------ 00:05:49.222 0,1 2752/s 113 MiB/s 0 0 00:05:49.222 0,0 2720/s 112 MiB/s 0 0 00:05:49.222 ==================================================================================== 00:05:49.222 Total 5472/s 580 MiB/s 0 0' 00:05:49.222 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.222 18:04:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:49.222 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.222 18:04:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.222 18:04:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:49.222 18:04:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.222 18:04:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.222 18:04:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.222 18:04:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.222 18:04:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.222 18:04:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.222 18:04:07 -- accel/accel.sh@42 -- # jq -r . 00:05:49.222 [2024-11-18 18:04:07.556185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.222 [2024-11-18 18:04:07.556297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57156 ] 00:05:49.223 [2024-11-18 18:04:07.687102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.223 [2024-11-18 18:04:07.733803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=0x1 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=decompress 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=software 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=32 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=32 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=2 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val=Yes 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:49.223 18:04:07 -- accel/accel.sh@21 -- # val= 00:05:49.223 18:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # IFS=: 00:05:49.223 18:04:07 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # 
read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@21 -- # val= 00:05:50.602 18:04:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # IFS=: 00:05:50.602 18:04:08 -- accel/accel.sh@20 -- # read -r var val 00:05:50.602 18:04:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.602 18:04:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:50.602 18:04:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.602 00:05:50.602 real 0m2.757s 00:05:50.602 user 0m2.430s 00:05:50.602 sys 0m0.130s 00:05:50.602 18:04:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.602 ************************************ 00:05:50.602 END TEST accel_deomp_full_mthread 00:05:50.602 ************************************ 00:05:50.602 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.602 18:04:08 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:50.602 18:04:08 -- accel/accel.sh@129 -- # build_accel_config 00:05:50.602 18:04:08 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:50.602 18:04:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.602 18:04:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.602 18:04:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:50.602 18:04:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.602 18:04:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.602 18:04:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.602 18:04:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.602 18:04:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.602 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.602 18:04:08 -- accel/accel.sh@42 -- # jq -r . 00:05:50.602 ************************************ 00:05:50.602 START TEST accel_dif_functional_tests 00:05:50.602 ************************************ 00:05:50.602 18:04:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:50.602 [2024-11-18 18:04:09.015275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
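The two-thread decompress run summarised above was driven by the accel_perf invocation shown in the trace. A minimal sketch of repeating that run by hand, assuming the same vagrant workspace layout this job uses:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -T 2
  # -t 1: run for one second, -w decompress: workload type, -T 2: two worker
  # threads on the core, -y: verify the output; -l and -o are copied verbatim
  # from the trace above.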
00:05:50.603 [2024-11-18 18:04:09.015363] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57192 ] 00:05:50.603 [2024-11-18 18:04:09.151628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.603 [2024-11-18 18:04:09.200999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.603 [2024-11-18 18:04:09.201131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.603 [2024-11-18 18:04:09.201134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.861 00:05:50.861 00:05:50.861 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.861 http://cunit.sourceforge.net/ 00:05:50.861 00:05:50.861 00:05:50.861 Suite: accel_dif 00:05:50.861 Test: verify: DIF generated, GUARD check ...passed 00:05:50.861 Test: verify: DIF generated, APPTAG check ...passed 00:05:50.861 Test: verify: DIF generated, REFTAG check ...passed 00:05:50.861 Test: verify: DIF not generated, GUARD check ...passed 00:05:50.861 Test: verify: DIF not generated, APPTAG check ...[2024-11-18 18:04:09.249963] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:50.861 [2024-11-18 18:04:09.250030] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:50.861 [2024-11-18 18:04:09.250068] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:50.861 passed 00:05:50.861 Test: verify: DIF not generated, REFTAG check ...passed 00:05:50.861 Test: verify: APPTAG correct, APPTAG check ...[2024-11-18 18:04:09.250108] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:50.861 [2024-11-18 18:04:09.250133] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:50.861 [2024-11-18 18:04:09.250157] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:50.861 passed 00:05:50.861 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-18 18:04:09.250331] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:50.861 passed 00:05:50.861 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:50.861 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:50.861 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:50.861 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:50.861 Test: generate copy: DIF generated, GUARD check ...[2024-11-18 18:04:09.250678] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:50.861 passed 00:05:50.861 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:50.861 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:50.861 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:50.861 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:50.861 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:50.861 Test: generate copy: iovecs-len validate ...passed 00:05:50.861 Test: generate copy: buffer alignment validate ...[2024-11-18 18:04:09.251236] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
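The Guard, App Tag and Ref Tag *ERROR* lines above are the negative cases of the DIF functional tests: each one hands the verifier a deliberately mismatched tag, so the reported compare failure is exactly what lets the case pass. A sketch of launching the same test binary outside the harness; the harness pipes its accel JSON config in on /dev/fd/62, and the empty-subsystems config used here is an assumption, not what this job ran with:

  echo '{"subsystems":[]}' > /tmp/accel.json
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp/accel.json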
00:05:50.861 passed 00:05:50.861 00:05:50.861 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.861 suites 1 1 n/a 0 0 00:05:50.861 tests 20 20 20 0 0 00:05:50.861 asserts 204 204 204 0 n/a 00:05:50.861 00:05:50.861 Elapsed time = 0.003 seconds 00:05:50.861 ************************************ 00:05:50.861 END TEST accel_dif_functional_tests 00:05:50.861 ************************************ 00:05:50.861 00:05:50.861 real 0m0.442s 00:05:50.861 user 0m0.503s 00:05:50.861 sys 0m0.100s 00:05:50.861 18:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.861 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:50.861 00:05:50.861 real 0m58.703s 00:05:50.861 user 1m4.019s 00:05:50.861 sys 0m4.044s 00:05:50.861 ************************************ 00:05:50.861 END TEST accel 00:05:50.861 ************************************ 00:05:50.861 18:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.861 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.121 18:04:09 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:51.121 18:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.121 18:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.121 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.121 ************************************ 00:05:51.121 START TEST accel_rpc 00:05:51.121 ************************************ 00:05:51.121 18:04:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:51.121 * Looking for test storage... 00:05:51.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:51.121 18:04:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.121 18:04:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.121 18:04:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.121 18:04:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.121 18:04:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.121 18:04:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.121 18:04:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.121 18:04:09 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.121 18:04:09 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.121 18:04:09 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.121 18:04:09 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.121 18:04:09 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.121 18:04:09 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.121 18:04:09 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.121 18:04:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.121 18:04:09 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.121 18:04:09 -- scripts/common.sh@344 -- # : 1 00:05:51.121 18:04:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.121 18:04:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.121 18:04:09 -- scripts/common.sh@364 -- # decimal 1 00:05:51.121 18:04:09 -- scripts/common.sh@352 -- # local d=1 00:05:51.121 18:04:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.121 18:04:09 -- scripts/common.sh@354 -- # echo 1 00:05:51.121 18:04:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.121 18:04:09 -- scripts/common.sh@365 -- # decimal 2 00:05:51.121 18:04:09 -- scripts/common.sh@352 -- # local d=2 00:05:51.121 18:04:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.121 18:04:09 -- scripts/common.sh@354 -- # echo 2 00:05:51.121 18:04:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.121 18:04:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.122 18:04:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.122 18:04:09 -- scripts/common.sh@367 -- # return 0 00:05:51.122 18:04:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.122 18:04:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.122 --rc genhtml_branch_coverage=1 00:05:51.122 --rc genhtml_function_coverage=1 00:05:51.122 --rc genhtml_legend=1 00:05:51.122 --rc geninfo_all_blocks=1 00:05:51.122 --rc geninfo_unexecuted_blocks=1 00:05:51.122 00:05:51.122 ' 00:05:51.122 18:04:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.122 --rc genhtml_branch_coverage=1 00:05:51.122 --rc genhtml_function_coverage=1 00:05:51.122 --rc genhtml_legend=1 00:05:51.122 --rc geninfo_all_blocks=1 00:05:51.122 --rc geninfo_unexecuted_blocks=1 00:05:51.122 00:05:51.122 ' 00:05:51.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.122 18:04:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.122 --rc genhtml_branch_coverage=1 00:05:51.122 --rc genhtml_function_coverage=1 00:05:51.122 --rc genhtml_legend=1 00:05:51.122 --rc geninfo_all_blocks=1 00:05:51.122 --rc geninfo_unexecuted_blocks=1 00:05:51.122 00:05:51.122 ' 00:05:51.122 18:04:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.122 --rc genhtml_branch_coverage=1 00:05:51.122 --rc genhtml_function_coverage=1 00:05:51.122 --rc genhtml_legend=1 00:05:51.122 --rc geninfo_all_blocks=1 00:05:51.122 --rc geninfo_unexecuted_blocks=1 00:05:51.122 00:05:51.122 ' 00:05:51.122 18:04:09 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.122 18:04:09 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57263 00:05:51.122 18:04:09 -- accel/accel_rpc.sh@15 -- # waitforlisten 57263 00:05:51.122 18:04:09 -- common/autotest_common.sh@829 -- # '[' -z 57263 ']' 00:05:51.122 18:04:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.122 18:04:09 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:51.122 18:04:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.122 18:04:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.122 18:04:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.122 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.381 [2024-11-18 18:04:09.735813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:51.381 [2024-11-18 18:04:09.735923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57263 ] 00:05:51.381 [2024-11-18 18:04:09.874019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.381 [2024-11-18 18:04:09.927094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.381 [2024-11-18 18:04:09.927248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.381 18:04:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.381 18:04:09 -- common/autotest_common.sh@862 -- # return 0 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:51.381 18:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.381 18:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.381 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.381 ************************************ 00:05:51.381 START TEST accel_assign_opcode 00:05:51.381 ************************************ 00:05:51.381 18:04:09 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:05:51.381 18:04:09 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:51.381 18:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.381 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.381 [2024-11-18 18:04:09.979676] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:51.640 18:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.640 18:04:09 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:51.640 18:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.640 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.640 [2024-11-18 18:04:09.987668] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:51.640 18:04:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.640 18:04:09 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:51.640 18:04:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.640 18:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:51.640 18:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.640 18:04:10 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:51.640 18:04:10 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:51.640 18:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.640 18:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:51.640 18:04:10 -- accel/accel_rpc.sh@42 -- # grep software 00:05:51.640 18:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.640 software 00:05:51.640 
************************************ 00:05:51.640 END TEST accel_assign_opcode 00:05:51.640 ************************************ 00:05:51.640 00:05:51.640 real 0m0.197s 00:05:51.640 user 0m0.062s 00:05:51.640 sys 0m0.008s 00:05:51.640 18:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.640 18:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:51.640 18:04:10 -- accel/accel_rpc.sh@55 -- # killprocess 57263 00:05:51.640 18:04:10 -- common/autotest_common.sh@936 -- # '[' -z 57263 ']' 00:05:51.640 18:04:10 -- common/autotest_common.sh@940 -- # kill -0 57263 00:05:51.640 18:04:10 -- common/autotest_common.sh@941 -- # uname 00:05:51.640 18:04:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.640 18:04:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57263 00:05:51.899 killing process with pid 57263 00:05:51.899 18:04:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.899 18:04:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.899 18:04:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57263' 00:05:51.899 18:04:10 -- common/autotest_common.sh@955 -- # kill 57263 00:05:51.899 18:04:10 -- common/autotest_common.sh@960 -- # wait 57263 00:05:52.159 00:05:52.159 real 0m1.005s 00:05:52.159 user 0m1.014s 00:05:52.159 sys 0m0.315s 00:05:52.159 18:04:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.159 18:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.159 ************************************ 00:05:52.159 END TEST accel_rpc 00:05:52.159 ************************************ 00:05:52.159 18:04:10 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.159 18:04:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.159 18:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.159 ************************************ 00:05:52.159 START TEST app_cmdline 00:05:52.159 ************************************ 00:05:52.159 18:04:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:52.159 * Looking for test storage... 
00:05:52.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:52.159 18:04:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:52.159 18:04:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:52.159 18:04:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:52.159 18:04:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:52.159 18:04:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:52.159 18:04:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:52.159 18:04:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:52.159 18:04:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:52.159 18:04:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.159 18:04:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:52.159 18:04:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:52.159 18:04:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:52.159 18:04:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:52.159 18:04:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:52.159 18:04:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:52.159 18:04:10 -- scripts/common.sh@344 -- # : 1 00:05:52.159 18:04:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:52.159 18:04:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.159 18:04:10 -- scripts/common.sh@364 -- # decimal 1 00:05:52.159 18:04:10 -- scripts/common.sh@352 -- # local d=1 00:05:52.159 18:04:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.159 18:04:10 -- scripts/common.sh@354 -- # echo 1 00:05:52.159 18:04:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:52.159 18:04:10 -- scripts/common.sh@365 -- # decimal 2 00:05:52.159 18:04:10 -- scripts/common.sh@352 -- # local d=2 00:05:52.159 18:04:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.159 18:04:10 -- scripts/common.sh@354 -- # echo 2 00:05:52.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.159 18:04:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:52.159 18:04:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:52.159 18:04:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:52.159 18:04:10 -- scripts/common.sh@367 -- # return 0 00:05:52.159 18:04:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:52.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.159 --rc genhtml_branch_coverage=1 00:05:52.159 --rc genhtml_function_coverage=1 00:05:52.159 --rc genhtml_legend=1 00:05:52.159 --rc geninfo_all_blocks=1 00:05:52.159 --rc geninfo_unexecuted_blocks=1 00:05:52.159 00:05:52.159 ' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:52.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.159 --rc genhtml_branch_coverage=1 00:05:52.159 --rc genhtml_function_coverage=1 00:05:52.159 --rc genhtml_legend=1 00:05:52.159 --rc geninfo_all_blocks=1 00:05:52.159 --rc geninfo_unexecuted_blocks=1 00:05:52.159 00:05:52.159 ' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:52.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.159 --rc genhtml_branch_coverage=1 00:05:52.159 --rc genhtml_function_coverage=1 00:05:52.159 --rc genhtml_legend=1 00:05:52.159 --rc geninfo_all_blocks=1 00:05:52.159 --rc geninfo_unexecuted_blocks=1 00:05:52.159 00:05:52.159 ' 00:05:52.159 18:04:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:52.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.159 --rc genhtml_branch_coverage=1 00:05:52.159 --rc genhtml_function_coverage=1 00:05:52.159 --rc genhtml_legend=1 00:05:52.159 --rc geninfo_all_blocks=1 00:05:52.159 --rc geninfo_unexecuted_blocks=1 00:05:52.159 00:05:52.159 ' 00:05:52.159 18:04:10 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:52.159 18:04:10 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57351 00:05:52.159 18:04:10 -- app/cmdline.sh@18 -- # waitforlisten 57351 00:05:52.159 18:04:10 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:52.159 18:04:10 -- common/autotest_common.sh@829 -- # '[' -z 57351 ']' 00:05:52.160 18:04:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.160 18:04:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.160 18:04:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.160 18:04:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.160 18:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.419 [2024-11-18 18:04:10.775686] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:52.419 [2024-11-18 18:04:10.775967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57351 ] 00:05:52.419 [2024-11-18 18:04:10.912017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.419 [2024-11-18 18:04:10.966491] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.419 [2024-11-18 18:04:10.966936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.354 18:04:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.354 18:04:11 -- common/autotest_common.sh@862 -- # return 0 00:05:53.354 18:04:11 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:53.354 { 00:05:53.354 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:05:53.354 "fields": { 00:05:53.354 "major": 24, 00:05:53.354 "minor": 1, 00:05:53.354 "patch": 1, 00:05:53.354 "suffix": "-pre", 00:05:53.354 "commit": "c13c99a5e" 00:05:53.354 } 00:05:53.354 } 00:05:53.354 18:04:11 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.354 18:04:11 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.354 18:04:11 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:53.354 18:04:11 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.354 18:04:11 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.354 18:04:11 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.354 18:04:11 -- app/cmdline.sh@26 -- # sort 00:05:53.354 18:04:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.354 18:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:53.612 18:04:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.612 18:04:12 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.613 18:04:12 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.613 18:04:12 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.613 18:04:12 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.613 18:04:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.613 18:04:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.613 18:04:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.613 18:04:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.613 18:04:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.613 18:04:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.613 18:04:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.613 18:04:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:53.613 18:04:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:53.613 18:04:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.872 request: 00:05:53.872 { 00:05:53.872 "method": "env_dpdk_get_mem_stats", 00:05:53.872 "req_id": 1 00:05:53.872 } 00:05:53.872 Got 
JSON-RPC error response 00:05:53.872 response: 00:05:53.872 { 00:05:53.872 "code": -32601, 00:05:53.872 "message": "Method not found" 00:05:53.872 } 00:05:53.872 18:04:12 -- common/autotest_common.sh@653 -- # es=1 00:05:53.872 18:04:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.872 18:04:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.872 18:04:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.872 18:04:12 -- app/cmdline.sh@1 -- # killprocess 57351 00:05:53.872 18:04:12 -- common/autotest_common.sh@936 -- # '[' -z 57351 ']' 00:05:53.872 18:04:12 -- common/autotest_common.sh@940 -- # kill -0 57351 00:05:53.872 18:04:12 -- common/autotest_common.sh@941 -- # uname 00:05:53.872 18:04:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.872 18:04:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57351 00:05:53.872 killing process with pid 57351 00:05:53.872 18:04:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.872 18:04:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.872 18:04:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57351' 00:05:53.872 18:04:12 -- common/autotest_common.sh@955 -- # kill 57351 00:05:53.872 18:04:12 -- common/autotest_common.sh@960 -- # wait 57351 00:05:54.131 ************************************ 00:05:54.131 END TEST app_cmdline 00:05:54.131 ************************************ 00:05:54.131 00:05:54.131 real 0m2.025s 00:05:54.131 user 0m2.644s 00:05:54.131 sys 0m0.358s 00:05:54.131 18:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.131 18:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.131 18:04:12 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:54.131 18:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.131 18:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.131 18:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.131 ************************************ 00:05:54.131 START TEST version 00:05:54.131 ************************************ 00:05:54.131 18:04:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:54.131 * Looking for test storage... 
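The -32601 "Method not found" response above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other RPC is rejected. A minimal sketch of the same check done by hand, assuming the job's paths and using a plain sleep in place of the harness's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 2
  $SPDK/scripts/rpc.py spdk_get_version          # allowed, prints the version object
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 Method not found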
00:05:54.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:54.131 18:04:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.131 18:04:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.131 18:04:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.391 18:04:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.391 18:04:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.391 18:04:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.391 18:04:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.391 18:04:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.391 18:04:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.391 18:04:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.391 18:04:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.391 18:04:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.391 18:04:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.391 18:04:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.391 18:04:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.391 18:04:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.391 18:04:12 -- scripts/common.sh@344 -- # : 1 00:05:54.391 18:04:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.391 18:04:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.391 18:04:12 -- scripts/common.sh@364 -- # decimal 1 00:05:54.391 18:04:12 -- scripts/common.sh@352 -- # local d=1 00:05:54.391 18:04:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.391 18:04:12 -- scripts/common.sh@354 -- # echo 1 00:05:54.391 18:04:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.391 18:04:12 -- scripts/common.sh@365 -- # decimal 2 00:05:54.391 18:04:12 -- scripts/common.sh@352 -- # local d=2 00:05:54.391 18:04:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.391 18:04:12 -- scripts/common.sh@354 -- # echo 2 00:05:54.391 18:04:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.391 18:04:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.391 18:04:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.391 18:04:12 -- scripts/common.sh@367 -- # return 0 00:05:54.391 18:04:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.391 18:04:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 18:04:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 18:04:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 18:04:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.391 --rc genhtml_branch_coverage=1 00:05:54.391 --rc genhtml_function_coverage=1 00:05:54.391 --rc genhtml_legend=1 00:05:54.391 --rc geninfo_all_blocks=1 00:05:54.391 --rc geninfo_unexecuted_blocks=1 00:05:54.391 00:05:54.391 ' 00:05:54.391 18:04:12 -- app/version.sh@17 -- # get_header_version major 00:05:54.391 18:04:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:54.391 18:04:12 -- app/version.sh@14 -- # cut -f2 00:05:54.391 18:04:12 -- app/version.sh@14 -- # tr -d '"' 00:05:54.391 18:04:12 -- app/version.sh@17 -- # major=24 00:05:54.391 18:04:12 -- app/version.sh@18 -- # get_header_version minor 00:05:54.391 18:04:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:54.391 18:04:12 -- app/version.sh@14 -- # tr -d '"' 00:05:54.391 18:04:12 -- app/version.sh@14 -- # cut -f2 00:05:54.391 18:04:12 -- app/version.sh@18 -- # minor=1 00:05:54.391 18:04:12 -- app/version.sh@19 -- # get_header_version patch 00:05:54.391 18:04:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:54.391 18:04:12 -- app/version.sh@14 -- # tr -d '"' 00:05:54.391 18:04:12 -- app/version.sh@14 -- # cut -f2 00:05:54.391 18:04:12 -- app/version.sh@19 -- # patch=1 00:05:54.391 18:04:12 -- app/version.sh@20 -- # get_header_version suffix 00:05:54.391 18:04:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:54.391 18:04:12 -- app/version.sh@14 -- # cut -f2 00:05:54.391 18:04:12 -- app/version.sh@14 -- # tr -d '"' 00:05:54.391 18:04:12 -- app/version.sh@20 -- # suffix=-pre 00:05:54.391 18:04:12 -- app/version.sh@22 -- # version=24.1 00:05:54.391 18:04:12 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:54.391 18:04:12 -- app/version.sh@25 -- # version=24.1.1 00:05:54.391 18:04:12 -- app/version.sh@28 -- # version=24.1.1rc0 00:05:54.392 18:04:12 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:54.392 18:04:12 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:54.392 18:04:12 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:05:54.392 18:04:12 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:05:54.392 00:05:54.392 real 0m0.240s 00:05:54.392 user 0m0.161s 00:05:54.392 sys 0m0.114s 00:05:54.392 18:04:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.392 ************************************ 00:05:54.392 END TEST version 00:05:54.392 ************************************ 00:05:54.392 18:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.392 18:04:12 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:05:54.392 18:04:12 -- spdk/autotest.sh@191 -- # uname -s 00:05:54.392 18:04:12 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:05:54.392 18:04:12 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:05:54.392 18:04:12 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:05:54.392 18:04:12 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:05:54.392 18:04:12 -- spdk/autotest.sh@199 -- # run_test spdk_dd 
/home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:54.392 18:04:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.392 18:04:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.392 18:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.392 ************************************ 00:05:54.392 START TEST spdk_dd 00:05:54.392 ************************************ 00:05:54.392 18:04:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:54.392 * Looking for test storage... 00:05:54.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:54.652 18:04:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.652 18:04:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.652 18:04:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.652 18:04:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.652 18:04:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.652 18:04:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.652 18:04:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.652 18:04:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.652 18:04:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.652 18:04:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.652 18:04:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.652 18:04:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.652 18:04:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.652 18:04:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.652 18:04:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.652 18:04:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.652 18:04:13 -- scripts/common.sh@344 -- # : 1 00:05:54.652 18:04:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.652 18:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.652 18:04:13 -- scripts/common.sh@364 -- # decimal 1 00:05:54.652 18:04:13 -- scripts/common.sh@352 -- # local d=1 00:05:54.652 18:04:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.652 18:04:13 -- scripts/common.sh@354 -- # echo 1 00:05:54.652 18:04:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.652 18:04:13 -- scripts/common.sh@365 -- # decimal 2 00:05:54.652 18:04:13 -- scripts/common.sh@352 -- # local d=2 00:05:54.652 18:04:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.652 18:04:13 -- scripts/common.sh@354 -- # echo 2 00:05:54.652 18:04:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.652 18:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.652 18:04:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.652 18:04:13 -- scripts/common.sh@367 -- # return 0 00:05:54.652 18:04:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.652 18:04:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.652 --rc genhtml_branch_coverage=1 00:05:54.652 --rc genhtml_function_coverage=1 00:05:54.652 --rc genhtml_legend=1 00:05:54.652 --rc geninfo_all_blocks=1 00:05:54.652 --rc geninfo_unexecuted_blocks=1 00:05:54.652 00:05:54.652 ' 00:05:54.652 18:04:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.652 --rc genhtml_branch_coverage=1 00:05:54.652 --rc genhtml_function_coverage=1 00:05:54.652 --rc genhtml_legend=1 00:05:54.652 --rc geninfo_all_blocks=1 00:05:54.652 --rc geninfo_unexecuted_blocks=1 00:05:54.652 00:05:54.652 ' 00:05:54.652 18:04:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.652 --rc genhtml_branch_coverage=1 00:05:54.652 --rc genhtml_function_coverage=1 00:05:54.652 --rc genhtml_legend=1 00:05:54.652 --rc geninfo_all_blocks=1 00:05:54.652 --rc geninfo_unexecuted_blocks=1 00:05:54.652 00:05:54.652 ' 00:05:54.652 18:04:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.652 --rc genhtml_branch_coverage=1 00:05:54.652 --rc genhtml_function_coverage=1 00:05:54.652 --rc genhtml_legend=1 00:05:54.652 --rc geninfo_all_blocks=1 00:05:54.652 --rc geninfo_unexecuted_blocks=1 00:05:54.652 00:05:54.652 ' 00:05:54.652 18:04:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.652 18:04:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.652 18:04:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.652 18:04:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.652 18:04:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.652 18:04:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.652 18:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.652 18:04:13 -- paths/export.sh@5 -- # export PATH 00:05:54.652 18:04:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.652 18:04:13 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:54.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.912 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.912 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:54.912 18:04:13 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:54.912 18:04:13 -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:54.912 18:04:13 -- scripts/common.sh@311 -- # local bdf bdfs 00:05:54.912 18:04:13 -- scripts/common.sh@312 -- # local nvmes 00:05:54.912 18:04:13 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:05:54.912 18:04:13 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:54.912 18:04:13 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:05:54.912 18:04:13 -- scripts/common.sh@297 -- # local bdf= 00:05:54.912 18:04:13 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:05:54.912 18:04:13 -- scripts/common.sh@232 -- # local class 00:05:54.912 18:04:13 -- scripts/common.sh@233 -- # local subclass 00:05:54.912 18:04:13 -- scripts/common.sh@234 -- # local progif 00:05:54.912 18:04:13 -- scripts/common.sh@235 -- # printf %02x 1 00:05:54.912 18:04:13 -- scripts/common.sh@235 -- # class=01 00:05:54.912 18:04:13 -- scripts/common.sh@236 -- # printf %02x 8 00:05:54.912 18:04:13 -- scripts/common.sh@236 -- # subclass=08 00:05:54.912 18:04:13 -- scripts/common.sh@237 -- # printf %02x 2 00:05:54.912 18:04:13 -- scripts/common.sh@237 -- # progif=02 00:05:54.912 18:04:13 -- scripts/common.sh@239 -- # hash lspci 00:05:54.912 18:04:13 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:05:54.912 18:04:13 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:05:54.912 18:04:13 -- scripts/common.sh@242 -- # grep -i -- -p02 00:05:54.912 18:04:13 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:54.912 18:04:13 -- scripts/common.sh@244 -- # tr -d '"' 00:05:54.912 18:04:13 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:54.912 18:04:13 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:05:54.912 18:04:13 -- scripts/common.sh@15 -- # local i 00:05:54.912 18:04:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:05:54.912 18:04:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:54.912 18:04:13 -- scripts/common.sh@24 -- # return 0 00:05:54.912 18:04:13 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:05:54.912 18:04:13 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:54.912 18:04:13 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:05:54.912 18:04:13 -- scripts/common.sh@15 -- # local i 00:05:54.912 18:04:13 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:05:54.912 18:04:13 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:54.912 18:04:13 -- scripts/common.sh@24 -- # return 0 00:05:54.912 18:04:13 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:05:54.912 18:04:13 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:54.912 18:04:13 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:05:54.912 18:04:13 -- scripts/common.sh@322 -- # uname -s 00:05:55.173 18:04:13 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:55.173 18:04:13 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:55.173 18:04:13 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:05:55.173 18:04:13 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:05:55.174 18:04:13 -- scripts/common.sh@322 -- # uname -s 00:05:55.174 18:04:13 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:05:55.174 18:04:13 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:05:55.174 18:04:13 -- scripts/common.sh@327 -- # (( 2 )) 00:05:55.174 18:04:13 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:55.174 18:04:13 -- dd/dd.sh@13 -- # check_liburing 00:05:55.174 18:04:13 -- dd/common.sh@139 -- # local lib so 00:05:55.174 18:04:13 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:05:55.174 18:04:13 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:05:55.174 
18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:55.174 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.174 18:04:13 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@142 -- # read -r lib _ so _ 00:05:55.175 18:04:13 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:55.175 18:04:13 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:55.175 * spdk_dd linked to liburing 00:05:55.175 18:04:13 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:55.175 18:04:13 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:55.175 18:04:13 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:55.175 18:04:13 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:55.175 18:04:13 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:55.175 18:04:13 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:55.175 18:04:13 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:55.175 18:04:13 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:55.175 18:04:13 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:55.175 18:04:13 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:55.175 18:04:13 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:55.175 18:04:13 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:55.175 18:04:13 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:55.175 
18:04:13 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:55.175 18:04:13 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:55.175 18:04:13 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:55.175 18:04:13 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:55.175 18:04:13 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:55.175 18:04:13 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:55.175 18:04:13 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:55.175 18:04:13 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:55.175 18:04:13 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:55.175 18:04:13 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:55.175 18:04:13 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:55.175 18:04:13 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:55.175 18:04:13 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:55.175 18:04:13 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:55.175 18:04:13 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:55.175 18:04:13 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:55.175 18:04:13 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:55.175 18:04:13 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:55.175 18:04:13 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:55.175 18:04:13 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:55.175 18:04:13 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:55.175 18:04:13 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:55.175 18:04:13 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:55.175 18:04:13 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:55.175 18:04:13 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:55.175 18:04:13 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:55.175 18:04:13 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:55.175 18:04:13 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:55.175 18:04:13 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:55.175 18:04:13 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:55.175 18:04:13 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:55.175 18:04:13 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:55.175 18:04:13 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:55.175 18:04:13 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:55.175 18:04:13 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:05:55.175 18:04:13 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:55.175 18:04:13 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:05:55.175 18:04:13 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:05:55.175 18:04:13 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:05:55.175 18:04:13 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:05:55.175 18:04:13 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:05:55.175 18:04:13 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:05:55.175 18:04:13 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:05:55.175 18:04:13 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:05:55.175 18:04:13 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:05:55.175 18:04:13 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:05:55.175 18:04:13 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:05:55.175 18:04:13 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:05:55.175 18:04:13 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:55.175 18:04:13 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:05:55.175 18:04:13 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:05:55.175 18:04:13 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:05:55.175 18:04:13 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:05:55.175 18:04:13 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:05:55.175 18:04:13 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:05:55.175 18:04:13 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:05:55.175 18:04:13 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:05:55.175 18:04:13 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:05:55.175 18:04:13 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:05:55.175 18:04:13 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:55.175 18:04:13 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:05:55.175 18:04:13 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:05:55.175 18:04:13 -- dd/common.sh@149 -- # [[ y != y ]] 00:05:55.175 18:04:13 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:05:55.175 18:04:13 -- dd/common.sh@156 -- # export liburing_in_use=1 00:05:55.175 18:04:13 -- dd/common.sh@156 -- # liburing_in_use=1 00:05:55.175 18:04:13 -- dd/common.sh@157 -- # return 0 00:05:55.175 18:04:13 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:55.175 18:04:13 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:55.175 18:04:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:55.175 18:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.175 18:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.175 ************************************ 00:05:55.175 START TEST spdk_dd_basic_rw 00:05:55.175 ************************************ 00:05:55.175 18:04:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:05:55.175 * Looking for test storage... 
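The dd/common.sh trace above is how the job decides whether spdk_dd was built against liburing: it walks the binary's dynamic dependencies one by one, matches each soname against liburing.so.*, and ends up exporting liburing_in_use=1, which gates the uring-specific dd tests later in the run. A minimal bash sketch of that detection step, assuming the spdk_dd path used throughout this job and the usual "lib => path (addr)" ldd output (names simplified from the real script):

  #!/usr/bin/env bash
  # Sketch of the liburing link check traced at dd/common.sh@142-157 (simplified).
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path assumed from this run
  liburing_in_use=0
  while read -r lib _ so _; do              # lib = soname, so = resolved path
      if [[ $lib == liburing.so.* ]]; then
          liburing_in_use=1
          printf '* spdk_dd linked to liburing\n'
      fi
  done < <(ldd "$spdk_dd")
  export liburing_in_use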
00:05:55.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:55.175 18:04:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.175 18:04:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.175 18:04:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.175 18:04:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.175 18:04:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.175 18:04:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.175 18:04:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.175 18:04:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.175 18:04:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.175 18:04:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.176 18:04:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.176 18:04:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.176 18:04:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.176 18:04:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.176 18:04:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.176 18:04:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.176 18:04:13 -- scripts/common.sh@344 -- # : 1 00:05:55.176 18:04:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.176 18:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.176 18:04:13 -- scripts/common.sh@364 -- # decimal 1 00:05:55.176 18:04:13 -- scripts/common.sh@352 -- # local d=1 00:05:55.176 18:04:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.176 18:04:13 -- scripts/common.sh@354 -- # echo 1 00:05:55.176 18:04:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.176 18:04:13 -- scripts/common.sh@365 -- # decimal 2 00:05:55.176 18:04:13 -- scripts/common.sh@352 -- # local d=2 00:05:55.176 18:04:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.176 18:04:13 -- scripts/common.sh@354 -- # echo 2 00:05:55.176 18:04:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.176 18:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.176 18:04:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.176 18:04:13 -- scripts/common.sh@367 -- # return 0 00:05:55.176 18:04:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.176 18:04:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.176 --rc genhtml_branch_coverage=1 00:05:55.176 --rc genhtml_function_coverage=1 00:05:55.176 --rc genhtml_legend=1 00:05:55.176 --rc geninfo_all_blocks=1 00:05:55.176 --rc geninfo_unexecuted_blocks=1 00:05:55.176 00:05:55.176 ' 00:05:55.176 18:04:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.176 --rc genhtml_branch_coverage=1 00:05:55.176 --rc genhtml_function_coverage=1 00:05:55.176 --rc genhtml_legend=1 00:05:55.176 --rc geninfo_all_blocks=1 00:05:55.176 --rc geninfo_unexecuted_blocks=1 00:05:55.176 00:05:55.176 ' 00:05:55.176 18:04:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.176 --rc genhtml_branch_coverage=1 00:05:55.176 --rc genhtml_function_coverage=1 00:05:55.176 --rc genhtml_legend=1 00:05:55.176 --rc geninfo_all_blocks=1 00:05:55.176 --rc geninfo_unexecuted_blocks=1 00:05:55.176 00:05:55.176 ' 00:05:55.176 18:04:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.176 --rc genhtml_branch_coverage=1 00:05:55.176 --rc genhtml_function_coverage=1 00:05:55.176 --rc genhtml_legend=1 00:05:55.176 --rc geninfo_all_blocks=1 00:05:55.176 --rc geninfo_unexecuted_blocks=1 00:05:55.176 00:05:55.176 ' 00:05:55.176 18:04:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.176 18:04:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.176 18:04:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.438 18:04:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.438 18:04:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.438 18:04:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.438 18:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.438 18:04:13 -- paths/export.sh@5 -- # export PATH 00:05:55.438 18:04:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.438 18:04:13 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:55.438 18:04:13 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:55.438 18:04:13 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:55.438 18:04:13 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:05:55.438 18:04:13 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:55.438 18:04:13 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:05:55.438 18:04:13 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:55.438 18:04:13 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.438 18:04:13 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.438 18:04:13 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:05:55.438 18:04:13 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:05:55.438 18:04:13 -- dd/common.sh@126 -- # mapfile -t id 00:05:55.438 18:04:13 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:05:55.439 18:04:13 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2192 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:55.439 18:04:13 -- dd/common.sh@130 -- # lbaf=04 00:05:55.439 18:04:13 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2192 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:55.439 18:04:13 -- dd/common.sh@132 -- # lbaf=4096 00:05:55.439 18:04:13 -- dd/common.sh@134 -- # echo 4096 00:05:55.439 18:04:13 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:55.439 18:04:13 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:55.439 18:04:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:55.439 18:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.439 18:04:13 
-- dd/basic_rw.sh@96 -- # gen_conf 00:05:55.439 18:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.439 18:04:13 -- dd/basic_rw.sh@96 -- # : 00:05:55.439 18:04:13 -- dd/common.sh@31 -- # xtrace_disable 00:05:55.439 18:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:55.439 ************************************ 00:05:55.439 START TEST dd_bs_lt_native_bs 00:05:55.439 ************************************ 00:05:55.439 18:04:13 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:55.439 18:04:13 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.439 18:04:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:55.439 18:04:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.439 18:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.439 18:04:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.439 18:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.439 18:04:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.440 18:04:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.440 18:04:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.440 18:04:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:55.440 18:04:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:55.440 { 00:05:55.440 "subsystems": [ 00:05:55.440 { 00:05:55.440 "subsystem": "bdev", 00:05:55.440 "config": [ 00:05:55.440 { 00:05:55.440 "params": { 00:05:55.440 "trtype": "pcie", 00:05:55.440 "traddr": "0000:00:06.0", 00:05:55.440 "name": "Nvme0" 00:05:55.440 }, 00:05:55.440 "method": "bdev_nvme_attach_controller" 00:05:55.440 }, 00:05:55.440 { 00:05:55.440 "method": "bdev_wait_for_examine" 00:05:55.440 } 00:05:55.440 ] 00:05:55.440 } 00:05:55.440 ] 00:05:55.440 } 00:05:55.440 [2024-11-18 18:04:14.029000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
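The long spdk_nvme_identify dump a few entries above is how basic_rw.sh learns the drive's native block size before it starts: dd/common.sh captures the identify output with mapfile, pulls the active format index out of "Current LBA Format: LBA Format #NN", then reads that format's "Data Size" (4096 here) and returns it as native_bs. A rough sketch of that extraction, assuming spdk_nvme_identify is on PATH and prints the same fields as in this log:

  # Sketch of the native-block-size probe (dd/common.sh get_native_nvme_bs, simplified).
  pci=0000:00:06.0                                  # controller address used in this run
  mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
  re_cur='Current LBA Format: *LBA Format #([0-9]+)'
  if [[ ${id[*]} =~ $re_cur ]]; then
      lbaf=${BASH_REMATCH[1]}                       # "04" in this log
      re_size="LBA Format #$lbaf: Data Size: *([0-9]+)"
      [[ ${id[*]} =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}   # 4096
  fi
  echo "$native_bs"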
00:05:55.440 [2024-11-18 18:04:14.029098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57702 ] 00:05:55.699 [2024-11-18 18:04:14.166743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.699 [2024-11-18 18:04:14.213546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.959 [2024-11-18 18:04:14.317113] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:55.959 [2024-11-18 18:04:14.317169] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.959 [2024-11-18 18:04:14.384618] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:05:55.959 ************************************ 00:05:55.959 END TEST dd_bs_lt_native_bs 00:05:55.959 ************************************ 00:05:55.959 18:04:14 -- common/autotest_common.sh@653 -- # es=234 00:05:55.959 18:04:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.959 18:04:14 -- common/autotest_common.sh@662 -- # es=106 00:05:55.959 18:04:14 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:55.959 18:04:14 -- common/autotest_common.sh@670 -- # es=1 00:05:55.959 18:04:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.959 00:05:55.959 real 0m0.506s 00:05:55.959 user 0m0.362s 00:05:55.959 sys 0m0.105s 00:05:55.959 18:04:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.959 18:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.959 18:04:14 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:55.959 18:04:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:55.959 18:04:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.959 18:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.959 ************************************ 00:05:55.959 START TEST dd_rw 00:05:55.959 ************************************ 00:05:55.959 18:04:14 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:05:55.959 18:04:14 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:55.959 18:04:14 -- dd/basic_rw.sh@12 -- # local count size 00:05:55.959 18:04:14 -- dd/basic_rw.sh@13 -- # local qds bss 00:05:55.959 18:04:14 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:55.959 18:04:14 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:55.959 18:04:14 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:55.959 18:04:14 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:55.959 18:04:14 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:55.959 18:04:14 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:55.959 18:04:14 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:55.959 18:04:14 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:55.959 18:04:14 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.959 18:04:14 -- dd/basic_rw.sh@23 -- # count=15 00:05:55.959 18:04:14 -- dd/basic_rw.sh@24 -- # count=15 00:05:55.959 18:04:14 -- dd/basic_rw.sh@25 -- # size=61440 00:05:55.959 18:04:14 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:55.959 18:04:14 -- dd/common.sh@98 -- # xtrace_disable 00:05:55.959 18:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:56.527 18:04:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
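dd_bs_lt_native_bs is a negative test: it is wrapped in the NOT helper, so the pass condition is that spdk_dd refuses a --bs of 2048 when the device's native block size is 4096, which is exactly the "--bs value cannot be less than ... native block size" error recorded above. The dd_rw test that starts next derives its parameter grid from the same native_bs: three block sizes (native_bs shifted left by 0, 1 and 2) crossed with queue depths 1 and 64, with the transfer count chosen so each pass moves a comparable amount of data (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152 bytes). A sketch of that setup, with names loosely following the trace:

  # Sketch of the dd_rw parameter grid seen at dd/basic_rw.sh@11-27 (simplified).
  native_bs=4096
  qds=(1 64)
  bss=()
  for s in {0..2}; do
      bss+=($((native_bs << s)))        # 4096 8192 16384
  done
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          count=$((61440 / bs))         # 15, 7, 3; the real script sets these per case
          echo "bs=$bs qd=$qd count=$count size=$((count * bs))"
      done
  done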
00:05:56.527 18:04:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:56.527 18:04:15 -- dd/common.sh@31 -- # xtrace_disable 00:05:56.527 18:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.786 [2024-11-18 18:04:15.149043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.786 [2024-11-18 18:04:15.149164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:05:56.786 { 00:05:56.786 "subsystems": [ 00:05:56.786 { 00:05:56.786 "subsystem": "bdev", 00:05:56.786 "config": [ 00:05:56.786 { 00:05:56.786 "params": { 00:05:56.786 "trtype": "pcie", 00:05:56.786 "traddr": "0000:00:06.0", 00:05:56.786 "name": "Nvme0" 00:05:56.786 }, 00:05:56.786 "method": "bdev_nvme_attach_controller" 00:05:56.786 }, 00:05:56.786 { 00:05:56.786 "method": "bdev_wait_for_examine" 00:05:56.786 } 00:05:56.786 ] 00:05:56.786 } 00:05:56.786 ] 00:05:56.786 } 00:05:56.786 [2024-11-18 18:04:15.289536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.786 [2024-11-18 18:04:15.335972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.045  [2024-11-18T18:04:15.649Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:57.045 00:05:57.045 18:04:15 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:57.045 18:04:15 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:57.045 18:04:15 -- dd/common.sh@31 -- # xtrace_disable 00:05:57.045 18:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.304 [2024-11-18 18:04:15.684831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.304 [2024-11-18 18:04:15.684964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57744 ] 00:05:57.304 { 00:05:57.304 "subsystems": [ 00:05:57.304 { 00:05:57.304 "subsystem": "bdev", 00:05:57.304 "config": [ 00:05:57.304 { 00:05:57.304 "params": { 00:05:57.304 "trtype": "pcie", 00:05:57.304 "traddr": "0000:00:06.0", 00:05:57.304 "name": "Nvme0" 00:05:57.304 }, 00:05:57.304 "method": "bdev_nvme_attach_controller" 00:05:57.304 }, 00:05:57.304 { 00:05:57.304 "method": "bdev_wait_for_examine" 00:05:57.304 } 00:05:57.304 ] 00:05:57.304 } 00:05:57.304 ] 00:05:57.304 } 00:05:57.304 [2024-11-18 18:04:15.826624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.304 [2024-11-18 18:04:15.875090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.563  [2024-11-18T18:04:16.167Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:57.563 00:05:57.563 18:04:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.563 18:04:16 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:57.563 18:04:16 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.563 18:04:16 -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.563 18:04:16 -- dd/common.sh@12 -- # local size=61440 00:05:57.563 18:04:16 -- dd/common.sh@14 -- # local bs=1048576 00:05:57.563 18:04:16 -- dd/common.sh@15 -- # local count=1 00:05:57.563 18:04:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.563 18:04:16 -- dd/common.sh@18 -- # gen_conf 00:05:57.563 18:04:16 -- dd/common.sh@31 -- # xtrace_disable 00:05:57.563 18:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.823 [2024-11-18 18:04:16.210266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.823 [2024-11-18 18:04:16.210789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57759 ] 00:05:57.823 { 00:05:57.823 "subsystems": [ 00:05:57.823 { 00:05:57.823 "subsystem": "bdev", 00:05:57.823 "config": [ 00:05:57.823 { 00:05:57.823 "params": { 00:05:57.823 "trtype": "pcie", 00:05:57.823 "traddr": "0000:00:06.0", 00:05:57.823 "name": "Nvme0" 00:05:57.823 }, 00:05:57.823 "method": "bdev_nvme_attach_controller" 00:05:57.823 }, 00:05:57.823 { 00:05:57.823 "method": "bdev_wait_for_examine" 00:05:57.823 } 00:05:57.823 ] 00:05:57.823 } 00:05:57.823 ] 00:05:57.823 } 00:05:57.823 [2024-11-18 18:04:16.349213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.823 [2024-11-18 18:04:16.395105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.082  [2024-11-18T18:04:16.686Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.082 00:05:58.082 18:04:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:58.082 18:04:16 -- dd/basic_rw.sh@23 -- # count=15 00:05:58.082 18:04:16 -- dd/basic_rw.sh@24 -- # count=15 00:05:58.082 18:04:16 -- dd/basic_rw.sh@25 -- # size=61440 00:05:58.082 18:04:16 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:58.082 18:04:16 -- dd/common.sh@98 -- # xtrace_disable 00:05:58.082 18:04:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.650 18:04:17 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:58.650 18:04:17 -- dd/basic_rw.sh@30 -- # gen_conf 00:05:58.650 18:04:17 -- dd/common.sh@31 -- # xtrace_disable 00:05:58.650 18:04:17 -- common/autotest_common.sh@10 -- # set +x 00:05:58.650 [2024-11-18 18:04:17.202995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.650 [2024-11-18 18:04:17.203281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57777 ] 00:05:58.650 { 00:05:58.650 "subsystems": [ 00:05:58.650 { 00:05:58.650 "subsystem": "bdev", 00:05:58.650 "config": [ 00:05:58.650 { 00:05:58.650 "params": { 00:05:58.650 "trtype": "pcie", 00:05:58.650 "traddr": "0000:00:06.0", 00:05:58.650 "name": "Nvme0" 00:05:58.650 }, 00:05:58.650 "method": "bdev_nvme_attach_controller" 00:05:58.650 }, 00:05:58.650 { 00:05:58.650 "method": "bdev_wait_for_examine" 00:05:58.650 } 00:05:58.650 ] 00:05:58.650 } 00:05:58.650 ] 00:05:58.650 } 00:05:58.910 [2024-11-18 18:04:17.341872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.910 [2024-11-18 18:04:17.387502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.910  [2024-11-18T18:04:17.772Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:59.168 00:05:59.168 18:04:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:59.168 18:04:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:05:59.168 18:04:17 -- dd/common.sh@31 -- # xtrace_disable 00:05:59.168 18:04:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.169 [2024-11-18 18:04:17.713427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.169 [2024-11-18 18:04:17.713757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57784 ] 00:05:59.169 { 00:05:59.169 "subsystems": [ 00:05:59.169 { 00:05:59.169 "subsystem": "bdev", 00:05:59.169 "config": [ 00:05:59.169 { 00:05:59.169 "params": { 00:05:59.169 "trtype": "pcie", 00:05:59.169 "traddr": "0000:00:06.0", 00:05:59.169 "name": "Nvme0" 00:05:59.169 }, 00:05:59.169 "method": "bdev_nvme_attach_controller" 00:05:59.169 }, 00:05:59.169 { 00:05:59.169 "method": "bdev_wait_for_examine" 00:05:59.169 } 00:05:59.169 ] 00:05:59.169 } 00:05:59.169 ] 00:05:59.169 } 00:05:59.428 [2024-11-18 18:04:17.851202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.428 [2024-11-18 18:04:17.902340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.428  [2024-11-18T18:04:18.291Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:59.687 00:05:59.687 18:04:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.687 18:04:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:59.687 18:04:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:59.687 18:04:18 -- dd/common.sh@11 -- # local nvme_ref= 00:05:59.687 18:04:18 -- dd/common.sh@12 -- # local size=61440 00:05:59.687 18:04:18 -- dd/common.sh@14 -- # local bs=1048576 00:05:59.687 18:04:18 -- dd/common.sh@15 -- # local count=1 00:05:59.687 18:04:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:59.687 18:04:18 -- dd/common.sh@18 -- # gen_conf 00:05:59.687 18:04:18 -- dd/common.sh@31 -- # xtrace_disable 00:05:59.687 18:04:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.687 [2024-11-18 
18:04:18.242489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.687 [2024-11-18 18:04:18.243001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57803 ] 00:05:59.687 { 00:05:59.687 "subsystems": [ 00:05:59.687 { 00:05:59.687 "subsystem": "bdev", 00:05:59.687 "config": [ 00:05:59.687 { 00:05:59.687 "params": { 00:05:59.687 "trtype": "pcie", 00:05:59.687 "traddr": "0000:00:06.0", 00:05:59.687 "name": "Nvme0" 00:05:59.687 }, 00:05:59.687 "method": "bdev_nvme_attach_controller" 00:05:59.687 }, 00:05:59.687 { 00:05:59.687 "method": "bdev_wait_for_examine" 00:05:59.687 } 00:05:59.687 ] 00:05:59.687 } 00:05:59.687 ] 00:05:59.687 } 00:05:59.947 [2024-11-18 18:04:18.381081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.947 [2024-11-18 18:04:18.426961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.947  [2024-11-18T18:04:18.811Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:00.207 00:06:00.207 18:04:18 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:00.207 18:04:18 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:00.207 18:04:18 -- dd/basic_rw.sh@23 -- # count=7 00:06:00.207 18:04:18 -- dd/basic_rw.sh@24 -- # count=7 00:06:00.207 18:04:18 -- dd/basic_rw.sh@25 -- # size=57344 00:06:00.207 18:04:18 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:00.207 18:04:18 -- dd/common.sh@98 -- # xtrace_disable 00:06:00.207 18:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 18:04:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:00.776 18:04:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:00.776 18:04:19 -- dd/common.sh@31 -- # xtrace_disable 00:06:00.776 18:04:19 -- common/autotest_common.sh@10 -- # set +x 00:06:00.776 [2024-11-18 18:04:19.202176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:00.776 [2024-11-18 18:04:19.202440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57821 ] 00:06:00.776 { 00:06:00.776 "subsystems": [ 00:06:00.776 { 00:06:00.776 "subsystem": "bdev", 00:06:00.776 "config": [ 00:06:00.776 { 00:06:00.776 "params": { 00:06:00.776 "trtype": "pcie", 00:06:00.776 "traddr": "0000:00:06.0", 00:06:00.776 "name": "Nvme0" 00:06:00.776 }, 00:06:00.776 "method": "bdev_nvme_attach_controller" 00:06:00.776 }, 00:06:00.776 { 00:06:00.776 "method": "bdev_wait_for_examine" 00:06:00.776 } 00:06:00.776 ] 00:06:00.776 } 00:06:00.776 ] 00:06:00.776 } 00:06:00.776 [2024-11-18 18:04:19.337975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.036 [2024-11-18 18:04:19.385845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.036  [2024-11-18T18:04:19.899Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:01.295 00:06:01.295 18:04:19 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:01.295 18:04:19 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:01.295 18:04:19 -- dd/common.sh@31 -- # xtrace_disable 00:06:01.295 18:04:19 -- common/autotest_common.sh@10 -- # set +x 00:06:01.295 [2024-11-18 18:04:19.713351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.295 [2024-11-18 18:04:19.713635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57828 ] 00:06:01.295 { 00:06:01.295 "subsystems": [ 00:06:01.296 { 00:06:01.296 "subsystem": "bdev", 00:06:01.296 "config": [ 00:06:01.296 { 00:06:01.296 "params": { 00:06:01.296 "trtype": "pcie", 00:06:01.296 "traddr": "0000:00:06.0", 00:06:01.296 "name": "Nvme0" 00:06:01.296 }, 00:06:01.296 "method": "bdev_nvme_attach_controller" 00:06:01.296 }, 00:06:01.296 { 00:06:01.296 "method": "bdev_wait_for_examine" 00:06:01.296 } 00:06:01.296 ] 00:06:01.296 } 00:06:01.296 ] 00:06:01.296 } 00:06:01.296 [2024-11-18 18:04:19.850340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.555 [2024-11-18 18:04:19.900254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.555  [2024-11-18T18:04:20.418Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:01.814 00:06:01.815 18:04:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:01.815 18:04:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:01.815 18:04:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:01.815 18:04:20 -- dd/common.sh@11 -- # local nvme_ref= 00:06:01.815 18:04:20 -- dd/common.sh@12 -- # local size=57344 00:06:01.815 18:04:20 -- dd/common.sh@14 -- # local bs=1048576 00:06:01.815 18:04:20 -- dd/common.sh@15 -- # local count=1 00:06:01.815 18:04:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:01.815 18:04:20 -- dd/common.sh@18 -- # gen_conf 00:06:01.815 18:04:20 -- dd/common.sh@31 -- # xtrace_disable 00:06:01.815 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.815 [2024-11-18 
18:04:20.240052] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.815 [2024-11-18 18:04:20.240140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57847 ] 00:06:01.815 { 00:06:01.815 "subsystems": [ 00:06:01.815 { 00:06:01.815 "subsystem": "bdev", 00:06:01.815 "config": [ 00:06:01.815 { 00:06:01.815 "params": { 00:06:01.815 "trtype": "pcie", 00:06:01.815 "traddr": "0000:00:06.0", 00:06:01.815 "name": "Nvme0" 00:06:01.815 }, 00:06:01.815 "method": "bdev_nvme_attach_controller" 00:06:01.815 }, 00:06:01.815 { 00:06:01.815 "method": "bdev_wait_for_examine" 00:06:01.815 } 00:06:01.815 ] 00:06:01.815 } 00:06:01.815 ] 00:06:01.815 } 00:06:01.815 [2024-11-18 18:04:20.377829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.074 [2024-11-18 18:04:20.425451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.074  [2024-11-18T18:04:20.937Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:02.333 00:06:02.333 18:04:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:02.333 18:04:20 -- dd/basic_rw.sh@23 -- # count=7 00:06:02.333 18:04:20 -- dd/basic_rw.sh@24 -- # count=7 00:06:02.333 18:04:20 -- dd/basic_rw.sh@25 -- # size=57344 00:06:02.333 18:04:20 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:02.333 18:04:20 -- dd/common.sh@98 -- # xtrace_disable 00:06:02.333 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.593 18:04:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:02.593 18:04:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:02.593 18:04:21 -- dd/common.sh@31 -- # xtrace_disable 00:06:02.593 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:06:02.852 [2024-11-18 18:04:21.211608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.852 [2024-11-18 18:04:21.211699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57865 ] 00:06:02.852 { 00:06:02.852 "subsystems": [ 00:06:02.852 { 00:06:02.852 "subsystem": "bdev", 00:06:02.852 "config": [ 00:06:02.852 { 00:06:02.852 "params": { 00:06:02.852 "trtype": "pcie", 00:06:02.852 "traddr": "0000:00:06.0", 00:06:02.852 "name": "Nvme0" 00:06:02.852 }, 00:06:02.852 "method": "bdev_nvme_attach_controller" 00:06:02.852 }, 00:06:02.852 { 00:06:02.852 "method": "bdev_wait_for_examine" 00:06:02.852 } 00:06:02.852 ] 00:06:02.852 } 00:06:02.852 ] 00:06:02.852 } 00:06:02.852 [2024-11-18 18:04:21.349916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.852 [2024-11-18 18:04:21.399537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.112  [2024-11-18T18:04:21.716Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:03.112 00:06:03.112 18:04:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:03.112 18:04:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:03.112 18:04:21 -- dd/common.sh@31 -- # xtrace_disable 00:06:03.112 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:06:03.371 [2024-11-18 18:04:21.729298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.371 [2024-11-18 18:04:21.729575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57872 ] 00:06:03.371 { 00:06:03.371 "subsystems": [ 00:06:03.371 { 00:06:03.371 "subsystem": "bdev", 00:06:03.371 "config": [ 00:06:03.371 { 00:06:03.371 "params": { 00:06:03.371 "trtype": "pcie", 00:06:03.371 "traddr": "0000:00:06.0", 00:06:03.371 "name": "Nvme0" 00:06:03.371 }, 00:06:03.371 "method": "bdev_nvme_attach_controller" 00:06:03.371 }, 00:06:03.371 { 00:06:03.371 "method": "bdev_wait_for_examine" 00:06:03.371 } 00:06:03.371 ] 00:06:03.371 } 00:06:03.371 ] 00:06:03.371 } 00:06:03.371 [2024-11-18 18:04:21.865646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.371 [2024-11-18 18:04:21.912588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.630  [2024-11-18T18:04:22.234Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:03.630 00:06:03.630 18:04:22 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.630 18:04:22 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:03.630 18:04:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.630 18:04:22 -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.630 18:04:22 -- dd/common.sh@12 -- # local size=57344 00:06:03.630 18:04:22 -- dd/common.sh@14 -- # local bs=1048576 00:06:03.630 18:04:22 -- dd/common.sh@15 -- # local count=1 00:06:03.630 18:04:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.630 18:04:22 -- dd/common.sh@18 -- # gen_conf 00:06:03.630 18:04:22 -- dd/common.sh@31 -- # xtrace_disable 00:06:03.630 18:04:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.889 [2024-11-18 
18:04:22.253955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.889 [2024-11-18 18:04:22.254056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57891 ] 00:06:03.889 { 00:06:03.889 "subsystems": [ 00:06:03.889 { 00:06:03.889 "subsystem": "bdev", 00:06:03.889 "config": [ 00:06:03.889 { 00:06:03.889 "params": { 00:06:03.889 "trtype": "pcie", 00:06:03.889 "traddr": "0000:00:06.0", 00:06:03.889 "name": "Nvme0" 00:06:03.889 }, 00:06:03.889 "method": "bdev_nvme_attach_controller" 00:06:03.889 }, 00:06:03.889 { 00:06:03.889 "method": "bdev_wait_for_examine" 00:06:03.889 } 00:06:03.889 ] 00:06:03.889 } 00:06:03.889 ] 00:06:03.889 } 00:06:03.889 [2024-11-18 18:04:22.390392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.889 [2024-11-18 18:04:22.436853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.149  [2024-11-18T18:04:22.753Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:04.149 00:06:04.149 18:04:22 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:04.149 18:04:22 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:04.149 18:04:22 -- dd/basic_rw.sh@23 -- # count=3 00:06:04.149 18:04:22 -- dd/basic_rw.sh@24 -- # count=3 00:06:04.149 18:04:22 -- dd/basic_rw.sh@25 -- # size=49152 00:06:04.149 18:04:22 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:04.149 18:04:22 -- dd/common.sh@98 -- # xtrace_disable 00:06:04.149 18:04:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.729 18:04:23 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:04.729 18:04:23 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:04.729 18:04:23 -- dd/common.sh@31 -- # xtrace_disable 00:06:04.729 18:04:23 -- common/autotest_common.sh@10 -- # set +x 00:06:04.729 [2024-11-18 18:04:23.154156] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.729 [2024-11-18 18:04:23.154247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57909 ] 00:06:04.729 { 00:06:04.729 "subsystems": [ 00:06:04.729 { 00:06:04.729 "subsystem": "bdev", 00:06:04.729 "config": [ 00:06:04.729 { 00:06:04.729 "params": { 00:06:04.729 "trtype": "pcie", 00:06:04.729 "traddr": "0000:00:06.0", 00:06:04.729 "name": "Nvme0" 00:06:04.729 }, 00:06:04.729 "method": "bdev_nvme_attach_controller" 00:06:04.729 }, 00:06:04.729 { 00:06:04.729 "method": "bdev_wait_for_examine" 00:06:04.729 } 00:06:04.729 ] 00:06:04.729 } 00:06:04.729 ] 00:06:04.729 } 00:06:04.729 [2024-11-18 18:04:23.292567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.000 [2024-11-18 18:04:23.346891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.000  [2024-11-18T18:04:23.863Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:05.259 00:06:05.259 18:04:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:05.259 18:04:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:05.259 18:04:23 -- dd/common.sh@31 -- # xtrace_disable 00:06:05.259 18:04:23 -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 [2024-11-18 18:04:23.673116] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.259 [2024-11-18 18:04:23.674095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:06:05.259 { 00:06:05.259 "subsystems": [ 00:06:05.259 { 00:06:05.259 "subsystem": "bdev", 00:06:05.259 "config": [ 00:06:05.259 { 00:06:05.259 "params": { 00:06:05.259 "trtype": "pcie", 00:06:05.259 "traddr": "0000:00:06.0", 00:06:05.259 "name": "Nvme0" 00:06:05.259 }, 00:06:05.259 "method": "bdev_nvme_attach_controller" 00:06:05.259 }, 00:06:05.259 { 00:06:05.259 "method": "bdev_wait_for_examine" 00:06:05.259 } 00:06:05.259 ] 00:06:05.259 } 00:06:05.259 ] 00:06:05.259 } 00:06:05.259 [2024-11-18 18:04:23.811385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.518 [2024-11-18 18:04:23.864007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.518  [2024-11-18T18:04:24.381Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:05.777 00:06:05.777 18:04:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.777 18:04:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:05.777 18:04:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.777 18:04:24 -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.777 18:04:24 -- dd/common.sh@12 -- # local size=49152 00:06:05.777 18:04:24 -- dd/common.sh@14 -- # local bs=1048576 00:06:05.777 18:04:24 -- dd/common.sh@15 -- # local count=1 00:06:05.777 18:04:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.777 18:04:24 -- dd/common.sh@18 -- # gen_conf 00:06:05.777 18:04:24 -- dd/common.sh@31 -- # xtrace_disable 00:06:05.777 18:04:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.777 [2024-11-18 
18:04:24.214439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.777 [2024-11-18 18:04:24.214752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57935 ] 00:06:05.777 { 00:06:05.777 "subsystems": [ 00:06:05.777 { 00:06:05.777 "subsystem": "bdev", 00:06:05.777 "config": [ 00:06:05.777 { 00:06:05.777 "params": { 00:06:05.777 "trtype": "pcie", 00:06:05.777 "traddr": "0000:00:06.0", 00:06:05.777 "name": "Nvme0" 00:06:05.777 }, 00:06:05.777 "method": "bdev_nvme_attach_controller" 00:06:05.777 }, 00:06:05.777 { 00:06:05.777 "method": "bdev_wait_for_examine" 00:06:05.777 } 00:06:05.777 ] 00:06:05.777 } 00:06:05.777 ] 00:06:05.777 } 00:06:05.777 [2024-11-18 18:04:24.353576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.036 [2024-11-18 18:04:24.401538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.036  [2024-11-18T18:04:24.900Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:06.296 00:06:06.296 18:04:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:06.296 18:04:24 -- dd/basic_rw.sh@23 -- # count=3 00:06:06.296 18:04:24 -- dd/basic_rw.sh@24 -- # count=3 00:06:06.296 18:04:24 -- dd/basic_rw.sh@25 -- # size=49152 00:06:06.296 18:04:24 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:06.296 18:04:24 -- dd/common.sh@98 -- # xtrace_disable 00:06:06.296 18:04:24 -- common/autotest_common.sh@10 -- # set +x 00:06:06.555 18:04:25 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:06.555 18:04:25 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.555 18:04:25 -- dd/common.sh@31 -- # xtrace_disable 00:06:06.555 18:04:25 -- common/autotest_common.sh@10 -- # set +x 00:06:06.555 [2024-11-18 18:04:25.130212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
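The 'for bs in "${bss[@]}"' / 'for qd in "${qds[@]}"' markers show that basic_rw drives the cycle above from two nested loops; this log covers 57344-byte transfers at bs=8192 and 49152-byte transfers at bs=16384, each at queue depths 1 and 64. In outline (the array contents and the wrapper name are inferred from the values seen here, not stated in the log):

  # loop skeleton suggested by the xtrace markers; bss/qds values are an assumption
  bss=(8192 16384)
  qds=(1 64)
  for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
      count=7; size=57344                   # bs=8192 passes in this log: 7 * 8192 = 57344
      [[ $bs -eq 16384 ]] && { count=3; size=49152; }   # 3 * 16384 = 49152
      run_rw_pass "$bs" "$qd" "$count"      # hypothetical wrapper around the write/read/diff cycle sketched earlier
    done
  done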
00:06:06.556 [2024-11-18 18:04:25.130573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57949 ] 00:06:06.556 { 00:06:06.556 "subsystems": [ 00:06:06.556 { 00:06:06.556 "subsystem": "bdev", 00:06:06.556 "config": [ 00:06:06.556 { 00:06:06.556 "params": { 00:06:06.556 "trtype": "pcie", 00:06:06.556 "traddr": "0000:00:06.0", 00:06:06.556 "name": "Nvme0" 00:06:06.556 }, 00:06:06.556 "method": "bdev_nvme_attach_controller" 00:06:06.556 }, 00:06:06.556 { 00:06:06.556 "method": "bdev_wait_for_examine" 00:06:06.556 } 00:06:06.556 ] 00:06:06.556 } 00:06:06.556 ] 00:06:06.556 } 00:06:06.815 [2024-11-18 18:04:25.273347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.815 [2024-11-18 18:04:25.321473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.074  [2024-11-18T18:04:25.678Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:07.074 00:06:07.074 18:04:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:07.074 18:04:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.074 18:04:25 -- dd/common.sh@31 -- # xtrace_disable 00:06:07.074 18:04:25 -- common/autotest_common.sh@10 -- # set +x 00:06:07.075 [2024-11-18 18:04:25.647375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.075 [2024-11-18 18:04:25.648190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ] 00:06:07.075 { 00:06:07.075 "subsystems": [ 00:06:07.075 { 00:06:07.075 "subsystem": "bdev", 00:06:07.075 "config": [ 00:06:07.075 { 00:06:07.075 "params": { 00:06:07.075 "trtype": "pcie", 00:06:07.075 "traddr": "0000:00:06.0", 00:06:07.075 "name": "Nvme0" 00:06:07.075 }, 00:06:07.075 "method": "bdev_nvme_attach_controller" 00:06:07.075 }, 00:06:07.075 { 00:06:07.075 "method": "bdev_wait_for_examine" 00:06:07.075 } 00:06:07.075 ] 00:06:07.075 } 00:06:07.075 ] 00:06:07.075 } 00:06:07.334 [2024-11-18 18:04:25.786003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.334 [2024-11-18 18:04:25.833296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.593  [2024-11-18T18:04:26.197Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:07.593 00:06:07.593 18:04:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.593 18:04:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:07.593 18:04:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.593 18:04:26 -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.593 18:04:26 -- dd/common.sh@12 -- # local size=49152 00:06:07.593 18:04:26 -- dd/common.sh@14 -- # local bs=1048576 00:06:07.593 18:04:26 -- dd/common.sh@15 -- # local count=1 00:06:07.593 18:04:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.593 18:04:26 -- dd/common.sh@18 -- # gen_conf 00:06:07.593 18:04:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:07.593 18:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.593 [2024-11-18 
18:04:26.184171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.593 [2024-11-18 18:04:26.184259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57979 ] 00:06:07.853 { 00:06:07.853 "subsystems": [ 00:06:07.853 { 00:06:07.853 "subsystem": "bdev", 00:06:07.853 "config": [ 00:06:07.853 { 00:06:07.853 "params": { 00:06:07.853 "trtype": "pcie", 00:06:07.853 "traddr": "0000:00:06.0", 00:06:07.853 "name": "Nvme0" 00:06:07.853 }, 00:06:07.853 "method": "bdev_nvme_attach_controller" 00:06:07.853 }, 00:06:07.853 { 00:06:07.853 "method": "bdev_wait_for_examine" 00:06:07.853 } 00:06:07.853 ] 00:06:07.853 } 00:06:07.853 ] 00:06:07.853 } 00:06:07.853 [2024-11-18 18:04:26.317057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.853 [2024-11-18 18:04:26.363510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.112  [2024-11-18T18:04:26.716Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.112 00:06:08.112 ************************************ 00:06:08.112 END TEST dd_rw 00:06:08.112 ************************************ 00:06:08.112 00:06:08.112 real 0m12.108s 00:06:08.112 user 0m9.028s 00:06:08.112 sys 0m2.056s 00:06:08.112 18:04:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.112 18:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:08.112 18:04:26 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:08.112 18:04:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.112 18:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.112 18:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:08.112 ************************************ 00:06:08.112 START TEST dd_rw_offset 00:06:08.112 ************************************ 00:06:08.112 18:04:26 -- common/autotest_common.sh@1114 -- # basic_offset 00:06:08.112 18:04:26 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:08.112 18:04:26 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:08.112 18:04:26 -- dd/common.sh@98 -- # xtrace_disable 00:06:08.112 18:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:08.372 18:04:26 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:08.372 18:04:26 -- dd/basic_rw.sh@56 -- # 
data=w56oestm205232hjajshedmrse8yso9hy56zno7eaeillkrrx2srvlpuy44wsxifc08t2kwua9rfaqgn2in35bee9fj9dr771h9mvaz2ydhpme8sixng8sw1eo4ivmpcqr4nnmh5mqzl1if5f4rpghk1cr1xaxkf8eno6x5fxmela6mn2m79lzlxi19wnumszq3gz72sscixx86ynlz5qpyfe97xssdzyb4fgd3zytgy3t7wewllsap41zikil051eyxdkl8b860kxna33jmwfw1ttxhpo2otp3jemeh9djsn08nrzb9xbsucqhbcsb1p2sjmpairdou402tfllscki9zfjjeqcs97evlnbf8dfi44obz1qh668rnfepgye5vow8pg6b1lskid3o4zsmq3fjv71wamzp14t3wzxq2ttbtwci9ruayfu7gv7rr57qy5e4jz0si2vc41rhqhb436syo74945fn6ozjb24uqu7qsxxnkcutly59erhl7m9qk5qpu7wks3w2eipf3sc0yymni3q6bb0f8rx1kjfmtskfhttsqqs35zygrbcoffyzq7ut63ss11w7q8679yltd60cqtdqcgq3xyyed4jv6i066trstxz7ens59xp5dyknjg96d4w61yfqqcj9e41d9i05zzwqtw0hwrjxqb82agsuif5mdhgsd3k2ussafzjhyod561ks44na13cj2mhmncrxqej5il9op2wxhazsif8paj2tpeer53dj03g6hx4ubtpvkumcv08rgju3suuh09e0sn3c2j9hroq4t0w6nmmew08gnt1u7v05mcoabkeof28l79dr0o7nnbs78k1nrwgwlbtu1p9dbv5b2legov999yf5mnzwzgupqr6qa51q1o9mnkm52ujfu6s3qakvg2o4topadya8etenxwwet75gtu92y6r3hlr0cspos13g2t6u5kh8yg8uk1w0dqvhbv6u38l9hllo8qzop99oal8oxlisf8251cvaiu2p8hrp4x3f3lachlfvnt6tfajql4eb6ejqtt7fp7wf2oa1w8uhl1y4d9gtpt74ff6p7wbsipo0vxqmx9jr2nhkmc72hovedvie1myfxlf7uvxv8i95il9xhtunbjfmlurmahfb3l5ysx5bosb5ox6b99rmc1wtju8rpp690hit4bnucvrwprj98p32b4lsuxu3o0hyjeka47pd7aaa8w2rm0luar9skgmi8demmeph7r8e8cuwo6x3sb0ik8t0a1zf49vsjgol76t3lq858qr7kb334r5wv1v2ga4h950kpt6govrwhegl1x8wp40vchif0fhas5rglbhnxc6a6pzfpbawq3ep6qgouq1r78f8a053ic34h4w663wa1p329kq9e7qgqmiz4lsv27pgul7xunfyumrdau2p0ri40snloenw86gkl3zr2n8zao3zhycvzysioqh5h8feth682i99w062m5fr29vvv1865c5wtpkkugyjmxatz3bd7x6cezpdh4s9bqk1jnw6w43k3q0jbjp54eyvqazbzakd87psqec08h78m2ifh42jpfjq9tj2lg16rnbqgua6s4ox8beihvoud6az14e0cy65o28vbsw7mkyynobodofmrve45dpfvquxdiii99mioevq7vhxmxkt6kmnzs9oa60igo7r5cqepouhwgzyb2naqoreexa4miotbfpe4mkkm0ba1d8m32qnzh0dwbl0uq9b5j72ob8o4j356jkka9wx3f175bs8c14z8qrdox111oriblgojcwbrf39zpa4l5f9n631yii8azmfcxpy2o5e5qmqjhkwp3jzdh3v10lkkwo6sd8c8lwl610zwxijllhqqnv3qczqzdmqf1em3degw2yb89qtpizfum54d645i5ll1bfces7yd7at0v7ifjy2kf9xohb438rt9z0tv81qilqhq3wtuf7ah6rlg5pnkntfbhbl3j0v2mhatmk1s30vuk7pu6k3eblg1gm4clumjug3u055iqhq0mxxpcdd5rxoqm6ply4pophr3487282jjo0njxbblp7i5w6pb3j95zb0wepz8t7zu0zf411dml65sok2e11weqpgx4xgdkkfcovcbzqiiztcipa8s0uz16vaypz66nuidw5nsjcp3vqne5mcm5kph6xfiu7iifdkkwu9uhmecu1bcjcc4xxr18d42xiwiaov1jbc68vabp0zg94qgpy56w625e07b8dd8r8n6qgc1yoi062w69aza33y1scvvzt2qlb2wi0y7x3uu8zksb3y8ez0lu58236op3tz3ts3guvjd329g2llqhxiwwvn4123l68v2lc9ufem967sbab1cadibmvjtj4l6lfu1r4m6723dgimvwq41v8wfype11pylgpich8rmtel1xdzx6fho9ahix6cct2o1b4siwy9g1v2f4dx10x7g0r08pdxvdsdztqwbpucljs08afnr9lyeyfo8nl8yhzlatn6atqk33i9n86jea7k7u2zdukv27dzniw86v0nc00b62ndp8ug4j4ebpcztot6nixct5xlcm355g8qfdedzr05gmnmg9b12zo54ypn1zvu4vusllekeszm6h0bmhv7mugfex2nsfitzoegss9557679kt5h8lq1vpd95h6wmsyd73iu79bkf64ogk7m640f49lnwo7jo9o98falvlae6n7z2q2evhnz71oc9zd6elqobas5fad455sudp8x9y6eabxrzp70ds9nyy4p2sc5omwdbth4cznx5n6svb0is3jla6gxo1a9ctm73z8u60kqut92zzlgv7emkyjmc58bu6qkyuj7ntnda081r609dyvjtz50a73r119b389n38gal2ny7z4b3azgyp3jh5xm6ytr62doyg6e14vbytkv8vd1xywv374v8e06bz0slgg4cpko2f2qil4yua5mx7vms4sl3qg8r0tz8ku00e7kcr7g3fn357rzmoe4wjo858rhhbkawpvo1ijd0s5tr86c8h89udjeokfw5xsqqpfbdqnmzg7hq6ugcmteeoea28kbd7e6hgd9b3dluy1o7k29zzertx38aidxv10riz6bu4ni6chvp12j5tecpdnhwb24c6aucscq9kg462173brjhc06mcjabm9li0a1ii33t5h4z12bd5t631wxgrmzyeatprszlk4p4ktev8tak9789f1ag66r9fehstzth0avu4p57l65hatyw0vkuym2e7ma15h44z8r4xey790wa467k8abbnmkgadhscm5wsaoj90y2kna5q81xarwppkhppxf1dmjghhkrrxxakxqz43klxssxp9dzyf7dz005i6efddjv7mn4xfbi4dvjj17s3kpkaftyd6ne0pwiomwl8tfebp00cvext3evinfm0156tu5aocti15kqqvlmuznb4g28f934x3q00sgzcxqgx8myv6bu2qekfswedauj1au3erc1zsy2r0xa0k18857r
0gy49nqyho2a7lxppqsgbnd5q67rrsqmblj6vc3oddksaaaq7iakk131s8tkf4cx0uq2ur1gdpmvpc9idxrapws7333frpxsqvdlnh4ozo5aa2iersy72v64idcxu198pcnqp1fdv2zhgx8n3vav9uffqy9hftrqrca466zv7hydu1k3esd9bz6cmib5u7322rz9h59njxx4e08m8wmwbktdgz4e12ethl6hx7fi75sn3dzsiv1aygea4eobmwejwdtf77x0qoxjw9mx7n8kjg3yybmr0x7h7cgyr3321skdfi0257dzz0p5vfnifv2qd7dl77fthxlvg4r7zjqm1nxn51x5026ogfr7uvxb0gvt3nhvtoqq8n2rx2btv5n5798u29ww7gagt4pl9v0w1z15x45pt68n4mg6e1q68oh45lw5o974btcbcpr2eewyxbjw2zwticccbssh2ut837xyx03jf7lipm32v9dxtpucwon9g5ijdnqt4l93vavbqaprahahyjuystkr7uqq87y075uq5l42wu 00:06:08.372 18:04:26 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:08.372 18:04:26 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:08.372 18:04:26 -- dd/common.sh@31 -- # xtrace_disable 00:06:08.372 18:04:26 -- common/autotest_common.sh@10 -- # set +x 00:06:08.372 [2024-11-18 18:04:26.803002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.372 [2024-11-18 18:04:26.803315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:06:08.372 { 00:06:08.372 "subsystems": [ 00:06:08.372 { 00:06:08.372 "subsystem": "bdev", 00:06:08.372 "config": [ 00:06:08.372 { 00:06:08.372 "params": { 00:06:08.372 "trtype": "pcie", 00:06:08.372 "traddr": "0000:00:06.0", 00:06:08.372 "name": "Nvme0" 00:06:08.372 }, 00:06:08.372 "method": "bdev_nvme_attach_controller" 00:06:08.372 }, 00:06:08.372 { 00:06:08.372 "method": "bdev_wait_for_examine" 00:06:08.372 } 00:06:08.372 ] 00:06:08.372 } 00:06:08.372 ] 00:06:08.372 } 00:06:08.372 [2024-11-18 18:04:26.945846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.632 [2024-11-18 18:04:27.000170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.632  [2024-11-18T18:04:27.495Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:08.891 00:06:08.891 18:04:27 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:08.891 18:04:27 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:08.891 18:04:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:08.891 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.891 [2024-11-18 18:04:27.337647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
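dd_rw_offset exercises --seek and --skip rather than throughput: the 4096-character random string above is written to the bdev one block past its start, then read back from the same offset as a single block. The two spdk_dd calls, as they appear in the trace (same gen_conf and path assumptions as in the earlier sketch):

  # dump0 holds the 4096 generated characters; write them one block into the bdev
  "$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)

  # read one block back, starting one block in, into dump1
  "$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(gen_conf)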
00:06:08.891 [2024-11-18 18:04:27.337773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:06:08.891 { 00:06:08.891 "subsystems": [ 00:06:08.891 { 00:06:08.891 "subsystem": "bdev", 00:06:08.891 "config": [ 00:06:08.891 { 00:06:08.891 "params": { 00:06:08.891 "trtype": "pcie", 00:06:08.891 "traddr": "0000:00:06.0", 00:06:08.891 "name": "Nvme0" 00:06:08.891 }, 00:06:08.891 "method": "bdev_nvme_attach_controller" 00:06:08.891 }, 00:06:08.891 { 00:06:08.891 "method": "bdev_wait_for_examine" 00:06:08.891 } 00:06:08.891 ] 00:06:08.891 } 00:06:08.891 ] 00:06:08.891 } 00:06:08.891 [2024-11-18 18:04:27.473848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.150 [2024-11-18 18:04:27.521622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.150  [2024-11-18T18:04:28.015Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:09.411 00:06:09.411 18:04:27 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:09.411 ************************************ 00:06:09.411 END TEST dd_rw_offset 00:06:09.411 ************************************ 00:06:09.412 18:04:27 -- dd/basic_rw.sh@72 -- # [[ w56oestm205232hjajshedmrse8yso9hy56zno7eaeillkrrx2srvlpuy44wsxifc08t2kwua9rfaqgn2in35bee9fj9dr771h9mvaz2ydhpme8sixng8sw1eo4ivmpcqr4nnmh5mqzl1if5f4rpghk1cr1xaxkf8eno6x5fxmela6mn2m79lzlxi19wnumszq3gz72sscixx86ynlz5qpyfe97xssdzyb4fgd3zytgy3t7wewllsap41zikil051eyxdkl8b860kxna33jmwfw1ttxhpo2otp3jemeh9djsn08nrzb9xbsucqhbcsb1p2sjmpairdou402tfllscki9zfjjeqcs97evlnbf8dfi44obz1qh668rnfepgye5vow8pg6b1lskid3o4zsmq3fjv71wamzp14t3wzxq2ttbtwci9ruayfu7gv7rr57qy5e4jz0si2vc41rhqhb436syo74945fn6ozjb24uqu7qsxxnkcutly59erhl7m9qk5qpu7wks3w2eipf3sc0yymni3q6bb0f8rx1kjfmtskfhttsqqs35zygrbcoffyzq7ut63ss11w7q8679yltd60cqtdqcgq3xyyed4jv6i066trstxz7ens59xp5dyknjg96d4w61yfqqcj9e41d9i05zzwqtw0hwrjxqb82agsuif5mdhgsd3k2ussafzjhyod561ks44na13cj2mhmncrxqej5il9op2wxhazsif8paj2tpeer53dj03g6hx4ubtpvkumcv08rgju3suuh09e0sn3c2j9hroq4t0w6nmmew08gnt1u7v05mcoabkeof28l79dr0o7nnbs78k1nrwgwlbtu1p9dbv5b2legov999yf5mnzwzgupqr6qa51q1o9mnkm52ujfu6s3qakvg2o4topadya8etenxwwet75gtu92y6r3hlr0cspos13g2t6u5kh8yg8uk1w0dqvhbv6u38l9hllo8qzop99oal8oxlisf8251cvaiu2p8hrp4x3f3lachlfvnt6tfajql4eb6ejqtt7fp7wf2oa1w8uhl1y4d9gtpt74ff6p7wbsipo0vxqmx9jr2nhkmc72hovedvie1myfxlf7uvxv8i95il9xhtunbjfmlurmahfb3l5ysx5bosb5ox6b99rmc1wtju8rpp690hit4bnucvrwprj98p32b4lsuxu3o0hyjeka47pd7aaa8w2rm0luar9skgmi8demmeph7r8e8cuwo6x3sb0ik8t0a1zf49vsjgol76t3lq858qr7kb334r5wv1v2ga4h950kpt6govrwhegl1x8wp40vchif0fhas5rglbhnxc6a6pzfpbawq3ep6qgouq1r78f8a053ic34h4w663wa1p329kq9e7qgqmiz4lsv27pgul7xunfyumrdau2p0ri40snloenw86gkl3zr2n8zao3zhycvzysioqh5h8feth682i99w062m5fr29vvv1865c5wtpkkugyjmxatz3bd7x6cezpdh4s9bqk1jnw6w43k3q0jbjp54eyvqazbzakd87psqec08h78m2ifh42jpfjq9tj2lg16rnbqgua6s4ox8beihvoud6az14e0cy65o28vbsw7mkyynobodofmrve45dpfvquxdiii99mioevq7vhxmxkt6kmnzs9oa60igo7r5cqepouhwgzyb2naqoreexa4miotbfpe4mkkm0ba1d8m32qnzh0dwbl0uq9b5j72ob8o4j356jkka9wx3f175bs8c14z8qrdox111oriblgojcwbrf39zpa4l5f9n631yii8azmfcxpy2o5e5qmqjhkwp3jzdh3v10lkkwo6sd8c8lwl610zwxijllhqqnv3qczqzdmqf1em3degw2yb89qtpizfum54d645i5ll1bfces7yd7at0v7ifjy2kf9xohb438rt9z0tv81qilqhq3wtuf7ah6rlg5pnkntfbhbl3j0v2mhatmk1s30vuk7pu6k3eblg1gm4clumjug3u055iqhq0mxxpcdd5rxoqm6ply4pophr3487282jjo0njxbblp7i5w6pb3j95zb0wepz8t7zu0zf411dml65sok2e11weqpgx4xgdkkfcovcbzqiiztcipa8s0uz16vaypz66nuidw5nsjcp3vqne5mcm5kph6xfiu7iifdkkwu9uhmecu1bcjcc
4xxr18d42xiwiaov1jbc68vabp0zg94qgpy56w625e07b8dd8r8n6qgc1yoi062w69aza33y1scvvzt2qlb2wi0y7x3uu8zksb3y8ez0lu58236op3tz3ts3guvjd329g2llqhxiwwvn4123l68v2lc9ufem967sbab1cadibmvjtj4l6lfu1r4m6723dgimvwq41v8wfype11pylgpich8rmtel1xdzx6fho9ahix6cct2o1b4siwy9g1v2f4dx10x7g0r08pdxvdsdztqwbpucljs08afnr9lyeyfo8nl8yhzlatn6atqk33i9n86jea7k7u2zdukv27dzniw86v0nc00b62ndp8ug4j4ebpcztot6nixct5xlcm355g8qfdedzr05gmnmg9b12zo54ypn1zvu4vusllekeszm6h0bmhv7mugfex2nsfitzoegss9557679kt5h8lq1vpd95h6wmsyd73iu79bkf64ogk7m640f49lnwo7jo9o98falvlae6n7z2q2evhnz71oc9zd6elqobas5fad455sudp8x9y6eabxrzp70ds9nyy4p2sc5omwdbth4cznx5n6svb0is3jla6gxo1a9ctm73z8u60kqut92zzlgv7emkyjmc58bu6qkyuj7ntnda081r609dyvjtz50a73r119b389n38gal2ny7z4b3azgyp3jh5xm6ytr62doyg6e14vbytkv8vd1xywv374v8e06bz0slgg4cpko2f2qil4yua5mx7vms4sl3qg8r0tz8ku00e7kcr7g3fn357rzmoe4wjo858rhhbkawpvo1ijd0s5tr86c8h89udjeokfw5xsqqpfbdqnmzg7hq6ugcmteeoea28kbd7e6hgd9b3dluy1o7k29zzertx38aidxv10riz6bu4ni6chvp12j5tecpdnhwb24c6aucscq9kg462173brjhc06mcjabm9li0a1ii33t5h4z12bd5t631wxgrmzyeatprszlk4p4ktev8tak9789f1ag66r9fehstzth0avu4p57l65hatyw0vkuym2e7ma15h44z8r4xey790wa467k8abbnmkgadhscm5wsaoj90y2kna5q81xarwppkhppxf1dmjghhkrrxxakxqz43klxssxp9dzyf7dz005i6efddjv7mn4xfbi4dvjj17s3kpkaftyd6ne0pwiomwl8tfebp00cvext3evinfm0156tu5aocti15kqqvlmuznb4g28f934x3q00sgzcxqgx8myv6bu2qekfswedauj1au3erc1zsy2r0xa0k18857r0gy49nqyho2a7lxppqsgbnd5q67rrsqmblj6vc3oddksaaaq7iakk131s8tkf4cx0uq2ur1gdpmvpc9idxrapws7333frpxsqvdlnh4ozo5aa2iersy72v64idcxu198pcnqp1fdv2zhgx8n3vav9uffqy9hftrqrca466zv7hydu1k3esd9bz6cmib5u7322rz9h59njxx4e08m8wmwbktdgz4e12ethl6hx7fi75sn3dzsiv1aygea4eobmwejwdtf77x0qoxjw9mx7n8kjg3yybmr0x7h7cgyr3321skdfi0257dzz0p5vfnifv2qd7dl77fthxlvg4r7zjqm1nxn51x5026ogfr7uvxb0gvt3nhvtoqq8n2rx2btv5n5798u29ww7gagt4pl9v0w1z15x45pt68n4mg6e1q68oh45lw5o974btcbcpr2eewyxbjw2zwticccbssh2ut837xyx03jf7lipm32v9dxtpucwon9g5ijdnqt4l93vavbqaprahahyjuystkr7uqq87y075uq5l42wu == 
\w\5\6\o\e\s\t\m\2\0\5\2\3\2\h\j\a\j\s\h\e\d\m\r\s\e\8\y\s\o\9\h\y\5\6\z\n\o\7\e\a\e\i\l\l\k\r\r\x\2\s\r\v\l\p\u\y\4\4\w\s\x\i\f\c\0\8\t\2\k\w\u\a\9\r\f\a\q\g\n\2\i\n\3\5\b\e\e\9\f\j\9\d\r\7\7\1\h\9\m\v\a\z\2\y\d\h\p\m\e\8\s\i\x\n\g\8\s\w\1\e\o\4\i\v\m\p\c\q\r\4\n\n\m\h\5\m\q\z\l\1\i\f\5\f\4\r\p\g\h\k\1\c\r\1\x\a\x\k\f\8\e\n\o\6\x\5\f\x\m\e\l\a\6\m\n\2\m\7\9\l\z\l\x\i\1\9\w\n\u\m\s\z\q\3\g\z\7\2\s\s\c\i\x\x\8\6\y\n\l\z\5\q\p\y\f\e\9\7\x\s\s\d\z\y\b\4\f\g\d\3\z\y\t\g\y\3\t\7\w\e\w\l\l\s\a\p\4\1\z\i\k\i\l\0\5\1\e\y\x\d\k\l\8\b\8\6\0\k\x\n\a\3\3\j\m\w\f\w\1\t\t\x\h\p\o\2\o\t\p\3\j\e\m\e\h\9\d\j\s\n\0\8\n\r\z\b\9\x\b\s\u\c\q\h\b\c\s\b\1\p\2\s\j\m\p\a\i\r\d\o\u\4\0\2\t\f\l\l\s\c\k\i\9\z\f\j\j\e\q\c\s\9\7\e\v\l\n\b\f\8\d\f\i\4\4\o\b\z\1\q\h\6\6\8\r\n\f\e\p\g\y\e\5\v\o\w\8\p\g\6\b\1\l\s\k\i\d\3\o\4\z\s\m\q\3\f\j\v\7\1\w\a\m\z\p\1\4\t\3\w\z\x\q\2\t\t\b\t\w\c\i\9\r\u\a\y\f\u\7\g\v\7\r\r\5\7\q\y\5\e\4\j\z\0\s\i\2\v\c\4\1\r\h\q\h\b\4\3\6\s\y\o\7\4\9\4\5\f\n\6\o\z\j\b\2\4\u\q\u\7\q\s\x\x\n\k\c\u\t\l\y\5\9\e\r\h\l\7\m\9\q\k\5\q\p\u\7\w\k\s\3\w\2\e\i\p\f\3\s\c\0\y\y\m\n\i\3\q\6\b\b\0\f\8\r\x\1\k\j\f\m\t\s\k\f\h\t\t\s\q\q\s\3\5\z\y\g\r\b\c\o\f\f\y\z\q\7\u\t\6\3\s\s\1\1\w\7\q\8\6\7\9\y\l\t\d\6\0\c\q\t\d\q\c\g\q\3\x\y\y\e\d\4\j\v\6\i\0\6\6\t\r\s\t\x\z\7\e\n\s\5\9\x\p\5\d\y\k\n\j\g\9\6\d\4\w\6\1\y\f\q\q\c\j\9\e\4\1\d\9\i\0\5\z\z\w\q\t\w\0\h\w\r\j\x\q\b\8\2\a\g\s\u\i\f\5\m\d\h\g\s\d\3\k\2\u\s\s\a\f\z\j\h\y\o\d\5\6\1\k\s\4\4\n\a\1\3\c\j\2\m\h\m\n\c\r\x\q\e\j\5\i\l\9\o\p\2\w\x\h\a\z\s\i\f\8\p\a\j\2\t\p\e\e\r\5\3\d\j\0\3\g\6\h\x\4\u\b\t\p\v\k\u\m\c\v\0\8\r\g\j\u\3\s\u\u\h\0\9\e\0\s\n\3\c\2\j\9\h\r\o\q\4\t\0\w\6\n\m\m\e\w\0\8\g\n\t\1\u\7\v\0\5\m\c\o\a\b\k\e\o\f\2\8\l\7\9\d\r\0\o\7\n\n\b\s\7\8\k\1\n\r\w\g\w\l\b\t\u\1\p\9\d\b\v\5\b\2\l\e\g\o\v\9\9\9\y\f\5\m\n\z\w\z\g\u\p\q\r\6\q\a\5\1\q\1\o\9\m\n\k\m\5\2\u\j\f\u\6\s\3\q\a\k\v\g\2\o\4\t\o\p\a\d\y\a\8\e\t\e\n\x\w\w\e\t\7\5\g\t\u\9\2\y\6\r\3\h\l\r\0\c\s\p\o\s\1\3\g\2\t\6\u\5\k\h\8\y\g\8\u\k\1\w\0\d\q\v\h\b\v\6\u\3\8\l\9\h\l\l\o\8\q\z\o\p\9\9\o\a\l\8\o\x\l\i\s\f\8\2\5\1\c\v\a\i\u\2\p\8\h\r\p\4\x\3\f\3\l\a\c\h\l\f\v\n\t\6\t\f\a\j\q\l\4\e\b\6\e\j\q\t\t\7\f\p\7\w\f\2\o\a\1\w\8\u\h\l\1\y\4\d\9\g\t\p\t\7\4\f\f\6\p\7\w\b\s\i\p\o\0\v\x\q\m\x\9\j\r\2\n\h\k\m\c\7\2\h\o\v\e\d\v\i\e\1\m\y\f\x\l\f\7\u\v\x\v\8\i\9\5\i\l\9\x\h\t\u\n\b\j\f\m\l\u\r\m\a\h\f\b\3\l\5\y\s\x\5\b\o\s\b\5\o\x\6\b\9\9\r\m\c\1\w\t\j\u\8\r\p\p\6\9\0\h\i\t\4\b\n\u\c\v\r\w\p\r\j\9\8\p\3\2\b\4\l\s\u\x\u\3\o\0\h\y\j\e\k\a\4\7\p\d\7\a\a\a\8\w\2\r\m\0\l\u\a\r\9\s\k\g\m\i\8\d\e\m\m\e\p\h\7\r\8\e\8\c\u\w\o\6\x\3\s\b\0\i\k\8\t\0\a\1\z\f\4\9\v\s\j\g\o\l\7\6\t\3\l\q\8\5\8\q\r\7\k\b\3\3\4\r\5\w\v\1\v\2\g\a\4\h\9\5\0\k\p\t\6\g\o\v\r\w\h\e\g\l\1\x\8\w\p\4\0\v\c\h\i\f\0\f\h\a\s\5\r\g\l\b\h\n\x\c\6\a\6\p\z\f\p\b\a\w\q\3\e\p\6\q\g\o\u\q\1\r\7\8\f\8\a\0\5\3\i\c\3\4\h\4\w\6\6\3\w\a\1\p\3\2\9\k\q\9\e\7\q\g\q\m\i\z\4\l\s\v\2\7\p\g\u\l\7\x\u\n\f\y\u\m\r\d\a\u\2\p\0\r\i\4\0\s\n\l\o\e\n\w\8\6\g\k\l\3\z\r\2\n\8\z\a\o\3\z\h\y\c\v\z\y\s\i\o\q\h\5\h\8\f\e\t\h\6\8\2\i\9\9\w\0\6\2\m\5\f\r\2\9\v\v\v\1\8\6\5\c\5\w\t\p\k\k\u\g\y\j\m\x\a\t\z\3\b\d\7\x\6\c\e\z\p\d\h\4\s\9\b\q\k\1\j\n\w\6\w\4\3\k\3\q\0\j\b\j\p\5\4\e\y\v\q\a\z\b\z\a\k\d\8\7\p\s\q\e\c\0\8\h\7\8\m\2\i\f\h\4\2\j\p\f\j\q\9\t\j\2\l\g\1\6\r\n\b\q\g\u\a\6\s\4\o\x\8\b\e\i\h\v\o\u\d\6\a\z\1\4\e\0\c\y\6\5\o\2\8\v\b\s\w\7\m\k\y\y\n\o\b\o\d\o\f\m\r\v\e\4\5\d\p\f\v\q\u\x\d\i\i\i\9\9\m\i\o\e\v\q\7\v\h\x\m\x\k\t\6\k\m\n\z\s\9\o\a\6\0\i\g\o\7\r\5\c\q\e\p\o\u\h\w\g\z\y\b\2\n\a\q\o\r\e\e\x\a\4\m\i\o\t\b\f\p\e\4\m\k\k\m\0\b\a\1\d\8\m\3\2\q\n\z\h\0\d\w\b\l\0\u\q\9\b\5\j\7\2\o\b\8\o\4\j\3\5\6\j\k\k\a\
9\w\x\3\f\1\7\5\b\s\8\c\1\4\z\8\q\r\d\o\x\1\1\1\o\r\i\b\l\g\o\j\c\w\b\r\f\3\9\z\p\a\4\l\5\f\9\n\6\3\1\y\i\i\8\a\z\m\f\c\x\p\y\2\o\5\e\5\q\m\q\j\h\k\w\p\3\j\z\d\h\3\v\1\0\l\k\k\w\o\6\s\d\8\c\8\l\w\l\6\1\0\z\w\x\i\j\l\l\h\q\q\n\v\3\q\c\z\q\z\d\m\q\f\1\e\m\3\d\e\g\w\2\y\b\8\9\q\t\p\i\z\f\u\m\5\4\d\6\4\5\i\5\l\l\1\b\f\c\e\s\7\y\d\7\a\t\0\v\7\i\f\j\y\2\k\f\9\x\o\h\b\4\3\8\r\t\9\z\0\t\v\8\1\q\i\l\q\h\q\3\w\t\u\f\7\a\h\6\r\l\g\5\p\n\k\n\t\f\b\h\b\l\3\j\0\v\2\m\h\a\t\m\k\1\s\3\0\v\u\k\7\p\u\6\k\3\e\b\l\g\1\g\m\4\c\l\u\m\j\u\g\3\u\0\5\5\i\q\h\q\0\m\x\x\p\c\d\d\5\r\x\o\q\m\6\p\l\y\4\p\o\p\h\r\3\4\8\7\2\8\2\j\j\o\0\n\j\x\b\b\l\p\7\i\5\w\6\p\b\3\j\9\5\z\b\0\w\e\p\z\8\t\7\z\u\0\z\f\4\1\1\d\m\l\6\5\s\o\k\2\e\1\1\w\e\q\p\g\x\4\x\g\d\k\k\f\c\o\v\c\b\z\q\i\i\z\t\c\i\p\a\8\s\0\u\z\1\6\v\a\y\p\z\6\6\n\u\i\d\w\5\n\s\j\c\p\3\v\q\n\e\5\m\c\m\5\k\p\h\6\x\f\i\u\7\i\i\f\d\k\k\w\u\9\u\h\m\e\c\u\1\b\c\j\c\c\4\x\x\r\1\8\d\4\2\x\i\w\i\a\o\v\1\j\b\c\6\8\v\a\b\p\0\z\g\9\4\q\g\p\y\5\6\w\6\2\5\e\0\7\b\8\d\d\8\r\8\n\6\q\g\c\1\y\o\i\0\6\2\w\6\9\a\z\a\3\3\y\1\s\c\v\v\z\t\2\q\l\b\2\w\i\0\y\7\x\3\u\u\8\z\k\s\b\3\y\8\e\z\0\l\u\5\8\2\3\6\o\p\3\t\z\3\t\s\3\g\u\v\j\d\3\2\9\g\2\l\l\q\h\x\i\w\w\v\n\4\1\2\3\l\6\8\v\2\l\c\9\u\f\e\m\9\6\7\s\b\a\b\1\c\a\d\i\b\m\v\j\t\j\4\l\6\l\f\u\1\r\4\m\6\7\2\3\d\g\i\m\v\w\q\4\1\v\8\w\f\y\p\e\1\1\p\y\l\g\p\i\c\h\8\r\m\t\e\l\1\x\d\z\x\6\f\h\o\9\a\h\i\x\6\c\c\t\2\o\1\b\4\s\i\w\y\9\g\1\v\2\f\4\d\x\1\0\x\7\g\0\r\0\8\p\d\x\v\d\s\d\z\t\q\w\b\p\u\c\l\j\s\0\8\a\f\n\r\9\l\y\e\y\f\o\8\n\l\8\y\h\z\l\a\t\n\6\a\t\q\k\3\3\i\9\n\8\6\j\e\a\7\k\7\u\2\z\d\u\k\v\2\7\d\z\n\i\w\8\6\v\0\n\c\0\0\b\6\2\n\d\p\8\u\g\4\j\4\e\b\p\c\z\t\o\t\6\n\i\x\c\t\5\x\l\c\m\3\5\5\g\8\q\f\d\e\d\z\r\0\5\g\m\n\m\g\9\b\1\2\z\o\5\4\y\p\n\1\z\v\u\4\v\u\s\l\l\e\k\e\s\z\m\6\h\0\b\m\h\v\7\m\u\g\f\e\x\2\n\s\f\i\t\z\o\e\g\s\s\9\5\5\7\6\7\9\k\t\5\h\8\l\q\1\v\p\d\9\5\h\6\w\m\s\y\d\7\3\i\u\7\9\b\k\f\6\4\o\g\k\7\m\6\4\0\f\4\9\l\n\w\o\7\j\o\9\o\9\8\f\a\l\v\l\a\e\6\n\7\z\2\q\2\e\v\h\n\z\7\1\o\c\9\z\d\6\e\l\q\o\b\a\s\5\f\a\d\4\5\5\s\u\d\p\8\x\9\y\6\e\a\b\x\r\z\p\7\0\d\s\9\n\y\y\4\p\2\s\c\5\o\m\w\d\b\t\h\4\c\z\n\x\5\n\6\s\v\b\0\i\s\3\j\l\a\6\g\x\o\1\a\9\c\t\m\7\3\z\8\u\6\0\k\q\u\t\9\2\z\z\l\g\v\7\e\m\k\y\j\m\c\5\8\b\u\6\q\k\y\u\j\7\n\t\n\d\a\0\8\1\r\6\0\9\d\y\v\j\t\z\5\0\a\7\3\r\1\1\9\b\3\8\9\n\3\8\g\a\l\2\n\y\7\z\4\b\3\a\z\g\y\p\3\j\h\5\x\m\6\y\t\r\6\2\d\o\y\g\6\e\1\4\v\b\y\t\k\v\8\v\d\1\x\y\w\v\3\7\4\v\8\e\0\6\b\z\0\s\l\g\g\4\c\p\k\o\2\f\2\q\i\l\4\y\u\a\5\m\x\7\v\m\s\4\s\l\3\q\g\8\r\0\t\z\8\k\u\0\0\e\7\k\c\r\7\g\3\f\n\3\5\7\r\z\m\o\e\4\w\j\o\8\5\8\r\h\h\b\k\a\w\p\v\o\1\i\j\d\0\s\5\t\r\8\6\c\8\h\8\9\u\d\j\e\o\k\f\w\5\x\s\q\q\p\f\b\d\q\n\m\z\g\7\h\q\6\u\g\c\m\t\e\e\o\e\a\2\8\k\b\d\7\e\6\h\g\d\9\b\3\d\l\u\y\1\o\7\k\2\9\z\z\e\r\t\x\3\8\a\i\d\x\v\1\0\r\i\z\6\b\u\4\n\i\6\c\h\v\p\1\2\j\5\t\e\c\p\d\n\h\w\b\2\4\c\6\a\u\c\s\c\q\9\k\g\4\6\2\1\7\3\b\r\j\h\c\0\6\m\c\j\a\b\m\9\l\i\0\a\1\i\i\3\3\t\5\h\4\z\1\2\b\d\5\t\6\3\1\w\x\g\r\m\z\y\e\a\t\p\r\s\z\l\k\4\p\4\k\t\e\v\8\t\a\k\9\7\8\9\f\1\a\g\6\6\r\9\f\e\h\s\t\z\t\h\0\a\v\u\4\p\5\7\l\6\5\h\a\t\y\w\0\v\k\u\y\m\2\e\7\m\a\1\5\h\4\4\z\8\r\4\x\e\y\7\9\0\w\a\4\6\7\k\8\a\b\b\n\m\k\g\a\d\h\s\c\m\5\w\s\a\o\j\9\0\y\2\k\n\a\5\q\8\1\x\a\r\w\p\p\k\h\p\p\x\f\1\d\m\j\g\h\h\k\r\r\x\x\a\k\x\q\z\4\3\k\l\x\s\s\x\p\9\d\z\y\f\7\d\z\0\0\5\i\6\e\f\d\d\j\v\7\m\n\4\x\f\b\i\4\d\v\j\j\1\7\s\3\k\p\k\a\f\t\y\d\6\n\e\0\p\w\i\o\m\w\l\8\t\f\e\b\p\0\0\c\v\e\x\t\3\e\v\i\n\f\m\0\1\5\6\t\u\5\a\o\c\t\i\1\5\k\q\q\v\l\m\u\z\n\b\4\g\2\8\f\9\3\4\x\3\q\0\0\s\g\z\c\x\q\g\x\8\m\y\v\6\b\u\2\q\e\k\f\s\w\e\d\a\u\j\1\a\u\3\e\r\c\1\z\s\y\2\r\0\x\a\0\k\1\8\8\5\7\r\0\g\y\4\9
\n\q\y\h\o\2\a\7\l\x\p\p\q\s\g\b\n\d\5\q\6\7\r\r\s\q\m\b\l\j\6\v\c\3\o\d\d\k\s\a\a\a\q\7\i\a\k\k\1\3\1\s\8\t\k\f\4\c\x\0\u\q\2\u\r\1\g\d\p\m\v\p\c\9\i\d\x\r\a\p\w\s\7\3\3\3\f\r\p\x\s\q\v\d\l\n\h\4\o\z\o\5\a\a\2\i\e\r\s\y\7\2\v\6\4\i\d\c\x\u\1\9\8\p\c\n\q\p\1\f\d\v\2\z\h\g\x\8\n\3\v\a\v\9\u\f\f\q\y\9\h\f\t\r\q\r\c\a\4\6\6\z\v\7\h\y\d\u\1\k\3\e\s\d\9\b\z\6\c\m\i\b\5\u\7\3\2\2\r\z\9\h\5\9\n\j\x\x\4\e\0\8\m\8\w\m\w\b\k\t\d\g\z\4\e\1\2\e\t\h\l\6\h\x\7\f\i\7\5\s\n\3\d\z\s\i\v\1\a\y\g\e\a\4\e\o\b\m\w\e\j\w\d\t\f\7\7\x\0\q\o\x\j\w\9\m\x\7\n\8\k\j\g\3\y\y\b\m\r\0\x\7\h\7\c\g\y\r\3\3\2\1\s\k\d\f\i\0\2\5\7\d\z\z\0\p\5\v\f\n\i\f\v\2\q\d\7\d\l\7\7\f\t\h\x\l\v\g\4\r\7\z\j\q\m\1\n\x\n\5\1\x\5\0\2\6\o\g\f\r\7\u\v\x\b\0\g\v\t\3\n\h\v\t\o\q\q\8\n\2\r\x\2\b\t\v\5\n\5\7\9\8\u\2\9\w\w\7\g\a\g\t\4\p\l\9\v\0\w\1\z\1\5\x\4\5\p\t\6\8\n\4\m\g\6\e\1\q\6\8\o\h\4\5\l\w\5\o\9\7\4\b\t\c\b\c\p\r\2\e\e\w\y\x\b\j\w\2\z\w\t\i\c\c\c\b\s\s\h\2\u\t\8\3\7\x\y\x\0\3\j\f\7\l\i\p\m\3\2\v\9\d\x\t\p\u\c\w\o\n\9\g\5\i\j\d\n\q\t\4\l\9\3\v\a\v\b\q\a\p\r\a\h\a\h\y\j\u\y\s\t\k\r\7\u\q\q\8\7\y\0\7\5\u\q\5\l\4\2\w\u ]] 00:06:09.412 00:06:09.412 real 0m1.099s 00:06:09.412 user 0m0.789s 00:06:09.412 sys 0m0.214s 00:06:09.412 18:04:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.412 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:09.412 18:04:27 -- dd/basic_rw.sh@1 -- # cleanup 00:06:09.412 18:04:27 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:09.412 18:04:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:09.412 18:04:27 -- dd/common.sh@11 -- # local nvme_ref= 00:06:09.412 18:04:27 -- dd/common.sh@12 -- # local size=0xffff 00:06:09.412 18:04:27 -- dd/common.sh@14 -- # local bs=1048576 00:06:09.412 18:04:27 -- dd/common.sh@15 -- # local count=1 00:06:09.412 18:04:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:09.412 18:04:27 -- dd/common.sh@18 -- # gen_conf 00:06:09.412 18:04:27 -- dd/common.sh@31 -- # xtrace_disable 00:06:09.412 18:04:27 -- common/autotest_common.sh@10 -- # set +x 00:06:09.412 [2024-11-18 18:04:27.886853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
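The verification above happens entirely in bash: read -rn4096 pulls the first 4096 characters back out of dd.dump1 (the redirection itself is not shown by xtrace, so that detail is inferred) and the [[ ... == ... ]] test compares them with the generated string; the expected value is printed with every character backslash-escaped because the right-hand side of == inside [[ ]] is treated as a glob pattern and xtrace escapes it accordingly. Sketched:

  # data holds the 4096 characters generated at the start of dd_rw_offset
  read -rn4096 data_check < "$DUMP1"
  [[ $data_check == "$data" ]]

The cleanup that follows is the usual clear_nvme pass: spdk_dd copies one 1 MiB block from /dev/zero onto the bdev so the next test starts from a clean prefix.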
00:06:09.412 [2024-11-18 18:04:27.886935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:06:09.412 { 00:06:09.412 "subsystems": [ 00:06:09.412 { 00:06:09.412 "subsystem": "bdev", 00:06:09.412 "config": [ 00:06:09.412 { 00:06:09.412 "params": { 00:06:09.412 "trtype": "pcie", 00:06:09.412 "traddr": "0000:00:06.0", 00:06:09.412 "name": "Nvme0" 00:06:09.412 }, 00:06:09.412 "method": "bdev_nvme_attach_controller" 00:06:09.412 }, 00:06:09.412 { 00:06:09.412 "method": "bdev_wait_for_examine" 00:06:09.412 } 00:06:09.412 ] 00:06:09.412 } 00:06:09.412 ] 00:06:09.412 } 00:06:09.671 [2024-11-18 18:04:28.023732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.671 [2024-11-18 18:04:28.077417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.671  [2024-11-18T18:04:28.534Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:09.930 00:06:09.930 18:04:28 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.930 00:06:09.930 real 0m14.773s 00:06:09.930 user 0m10.781s 00:06:09.930 sys 0m2.667s 00:06:09.930 18:04:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.930 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.930 ************************************ 00:06:09.930 END TEST spdk_dd_basic_rw 00:06:09.930 ************************************ 00:06:09.930 18:04:28 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:09.930 18:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.930 18:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.930 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.930 ************************************ 00:06:09.930 START TEST spdk_dd_posix 00:06:09.930 ************************************ 00:06:09.930 18:04:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:09.930 * Looking for test storage... 
00:06:09.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:09.930 18:04:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.930 18:04:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.930 18:04:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:10.190 18:04:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:10.190 18:04:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:10.190 18:04:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:10.190 18:04:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:10.190 18:04:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:10.190 18:04:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:10.190 18:04:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.190 18:04:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:10.190 18:04:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:10.190 18:04:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:10.190 18:04:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:10.190 18:04:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:10.190 18:04:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:10.190 18:04:28 -- scripts/common.sh@344 -- # : 1 00:06:10.190 18:04:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:10.190 18:04:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.190 18:04:28 -- scripts/common.sh@364 -- # decimal 1 00:06:10.190 18:04:28 -- scripts/common.sh@352 -- # local d=1 00:06:10.190 18:04:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.190 18:04:28 -- scripts/common.sh@354 -- # echo 1 00:06:10.190 18:04:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:10.190 18:04:28 -- scripts/common.sh@365 -- # decimal 2 00:06:10.190 18:04:28 -- scripts/common.sh@352 -- # local d=2 00:06:10.190 18:04:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.190 18:04:28 -- scripts/common.sh@354 -- # echo 2 00:06:10.190 18:04:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:10.190 18:04:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:10.190 18:04:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:10.190 18:04:28 -- scripts/common.sh@367 -- # return 0 00:06:10.190 18:04:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.190 18:04:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.190 --rc genhtml_branch_coverage=1 00:06:10.190 --rc genhtml_function_coverage=1 00:06:10.190 --rc genhtml_legend=1 00:06:10.190 --rc geninfo_all_blocks=1 00:06:10.190 --rc geninfo_unexecuted_blocks=1 00:06:10.190 00:06:10.190 ' 00:06:10.190 18:04:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.190 --rc genhtml_branch_coverage=1 00:06:10.190 --rc genhtml_function_coverage=1 00:06:10.190 --rc genhtml_legend=1 00:06:10.190 --rc geninfo_all_blocks=1 00:06:10.190 --rc geninfo_unexecuted_blocks=1 00:06:10.190 00:06:10.190 ' 00:06:10.190 18:04:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.190 --rc genhtml_branch_coverage=1 00:06:10.190 --rc genhtml_function_coverage=1 00:06:10.190 --rc genhtml_legend=1 00:06:10.190 --rc geninfo_all_blocks=1 00:06:10.190 --rc geninfo_unexecuted_blocks=1 00:06:10.190 00:06:10.190 ' 00:06:10.190 18:04:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:10.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.190 --rc genhtml_branch_coverage=1 00:06:10.190 --rc genhtml_function_coverage=1 00:06:10.190 --rc genhtml_legend=1 00:06:10.190 --rc geninfo_all_blocks=1 00:06:10.190 --rc geninfo_unexecuted_blocks=1 00:06:10.190 00:06:10.190 ' 00:06:10.190 18:04:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.190 18:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.190 18:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.190 18:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.190 18:04:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.190 18:04:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.191 18:04:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.191 18:04:28 -- paths/export.sh@5 -- # export PATH 00:06:10.191 18:04:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.191 18:04:28 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:10.191 18:04:28 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:10.191 18:04:28 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:10.191 18:04:28 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:10.191 18:04:28 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.191 18:04:28 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.191 18:04:28 -- dd/posix.sh@130 -- # tests 00:06:10.191 18:04:28 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:10.191 * First test run, liburing in use 00:06:10.191 18:04:28 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:10.191 18:04:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.191 18:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.191 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.191 ************************************ 00:06:10.191 START TEST dd_flag_append 00:06:10.191 ************************************ 00:06:10.191 18:04:28 -- common/autotest_common.sh@1114 -- # append 00:06:10.191 18:04:28 -- dd/posix.sh@16 -- # local dump0 00:06:10.191 18:04:28 -- dd/posix.sh@17 -- # local dump1 00:06:10.191 18:04:28 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:10.191 18:04:28 -- dd/common.sh@98 -- # xtrace_disable 00:06:10.191 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.191 18:04:28 -- dd/posix.sh@19 -- # dump0=t8e5mh05w8wgfh87xtxlz26jyb2s5jvy 00:06:10.191 18:04:28 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:10.191 18:04:28 -- dd/common.sh@98 -- # xtrace_disable 00:06:10.191 18:04:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.191 18:04:28 -- dd/posix.sh@20 -- # dump1=70tcvhlg07ndhjjhb3qszjf5di465wfc 00:06:10.191 18:04:28 -- dd/posix.sh@22 -- # printf %s t8e5mh05w8wgfh87xtxlz26jyb2s5jvy 00:06:10.191 18:04:28 -- dd/posix.sh@23 -- # printf %s 70tcvhlg07ndhjjhb3qszjf5di465wfc 00:06:10.191 18:04:28 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:10.191 [2024-11-18 18:04:28.655288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.191 [2024-11-18 18:04:28.655384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58114 ] 00:06:10.450 [2024-11-18 18:04:28.794338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.450 [2024-11-18 18:04:28.842160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.450  [2024-11-18T18:04:29.054Z] Copying: 32/32 [B] (average 31 kBps) 00:06:10.450 00:06:10.450 18:04:29 -- dd/posix.sh@27 -- # [[ 70tcvhlg07ndhjjhb3qszjf5di465wfct8e5mh05w8wgfh87xtxlz26jyb2s5jvy == \7\0\t\c\v\h\l\g\0\7\n\d\h\j\j\h\b\3\q\s\z\j\f\5\d\i\4\6\5\w\f\c\t\8\e\5\m\h\0\5\w\8\w\g\f\h\8\7\x\t\x\l\z\2\6\j\y\b\2\s\5\j\v\y ]] 00:06:10.450 00:06:10.450 real 0m0.448s 00:06:10.450 user 0m0.240s 00:06:10.450 sys 0m0.086s 00:06:10.450 ************************************ 00:06:10.450 END TEST dd_flag_append 00:06:10.450 ************************************ 00:06:10.450 18:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.450 18:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:10.710 18:04:29 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:10.710 18:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.710 18:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.710 18:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:10.710 ************************************ 00:06:10.710 START TEST dd_flag_directory 00:06:10.710 ************************************ 00:06:10.710 18:04:29 -- common/autotest_common.sh@1114 -- # directory 00:06:10.710 18:04:29 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.710 18:04:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.710 18:04:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.710 18:04:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.710 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.710 18:04:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.710 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.710 18:04:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.710 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.710 18:04:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.710 18:04:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.710 18:04:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.710 [2024-11-18 18:04:29.148221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
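The dd_flag_append test that just finished is self-contained: gen_bytes 32 produces the two strings above, the first is written to dd.dump0 and the second to dd.dump1, spdk_dd copies dump0 onto dump1 with --oflag=append, and the final [[ ... ]] check confirms dump1 now contains the second string immediately followed by the first. Condensed (the printf redirection targets are inferred from the flow):

  dump0=t8e5mh05w8wgfh87xtxlz26jyb2s5jvy     # gen_bytes 32, values taken from the log
  dump1=70tcvhlg07ndhjjhb3qszjf5di465wfc
  printf %s "$dump0" > "$DUMP0"
  printf %s "$dump1" > "$DUMP1"

  # append dump0's 32 bytes to dump1 through spdk_dd (file to file, so no --json bdev config)
  "$DD" --if="$DUMP0" --of="$DUMP1" --oflag=append

  # dump1 must now be the concatenation of the two strings
  [[ $(<"$DUMP1") == "${dump1}${dump0}" ]]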
00:06:10.710 [2024-11-18 18:04:29.148317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58141 ] 00:06:10.710 [2024-11-18 18:04:29.285575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.969 [2024-11-18 18:04:29.341118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.969 [2024-11-18 18:04:29.386929] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:10.969 [2024-11-18 18:04:29.386991] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:10.970 [2024-11-18 18:04:29.387018] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.970 [2024-11-18 18:04:29.442906] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:10.970 18:04:29 -- common/autotest_common.sh@653 -- # es=236 00:06:10.970 18:04:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.970 18:04:29 -- common/autotest_common.sh@662 -- # es=108 00:06:10.970 18:04:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.970 18:04:29 -- common/autotest_common.sh@670 -- # es=1 00:06:10.970 18:04:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.970 18:04:29 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:10.970 18:04:29 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.970 18:04:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:10.970 18:04:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.970 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.970 18:04:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.970 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.970 18:04:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.970 18:04:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.970 18:04:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.970 18:04:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.970 18:04:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:11.229 [2024-11-18 18:04:29.582174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
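dd_flag_directory is a negative test: dd.dump0 is a regular file, so opening it with --iflag=directory (and then writing it with --oflag=directory) must fail, and the "Not a directory" errors together with the NOT wrapper and the es=236 / es=108 / es=1 bookkeeping above are that expected failure being asserted, not a real problem. The two checks, as traced:

  # both invocations are required to fail on a regular file
  NOT "$DD" --if="$DUMP0" --iflag=directory --of="$DUMP0"
  NOT "$DD" --if="$DUMP0" --of="$DUMP0" --oflag=directory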
00:06:11.229 [2024-11-18 18:04:29.582252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:06:11.229 [2024-11-18 18:04:29.712654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.229 [2024-11-18 18:04:29.758809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.229 [2024-11-18 18:04:29.800756] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:11.229 [2024-11-18 18:04:29.800821] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:11.229 [2024-11-18 18:04:29.800849] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.488 [2024-11-18 18:04:29.858230] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:11.488 18:04:29 -- common/autotest_common.sh@653 -- # es=236 00:06:11.488 18:04:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.488 18:04:29 -- common/autotest_common.sh@662 -- # es=108 00:06:11.488 18:04:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.488 18:04:29 -- common/autotest_common.sh@670 -- # es=1 00:06:11.488 18:04:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.488 00:06:11.488 real 0m0.858s 00:06:11.488 user 0m0.475s 00:06:11.489 sys 0m0.175s 00:06:11.489 18:04:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.489 ************************************ 00:06:11.489 END TEST dd_flag_directory 00:06:11.489 ************************************ 00:06:11.489 18:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.489 18:04:29 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:11.489 18:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.489 18:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.489 18:04:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.489 ************************************ 00:06:11.489 START TEST dd_flag_nofollow 00:06:11.489 ************************************ 00:06:11.489 18:04:29 -- common/autotest_common.sh@1114 -- # nofollow 00:06:11.489 18:04:29 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:11.489 18:04:29 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:11.489 18:04:29 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:11.489 18:04:30 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:11.489 18:04:30 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.489 18:04:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.489 18:04:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.489 18:04:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.489 18:04:30 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.489 18:04:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.489 18:04:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.489 18:04:30 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.489 18:04:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.489 18:04:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.489 18:04:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:11.489 18:04:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.489 [2024-11-18 18:04:30.058348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.489 [2024-11-18 18:04:30.058459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ] 00:06:11.748 [2024-11-18 18:04:30.194719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.748 [2024-11-18 18:04:30.242233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.748 [2024-11-18 18:04:30.283907] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:11.748 [2024-11-18 18:04:30.283974] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:11.748 [2024-11-18 18:04:30.284002] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.748 [2024-11-18 18:04:30.345117] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:12.008 18:04:30 -- common/autotest_common.sh@653 -- # es=216 00:06:12.008 18:04:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.008 18:04:30 -- common/autotest_common.sh@662 -- # es=88 00:06:12.008 18:04:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:12.008 18:04:30 -- common/autotest_common.sh@670 -- # es=1 00:06:12.008 18:04:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.008 18:04:30 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:12.008 18:04:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:12.008 18:04:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:12.008 18:04:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.008 18:04:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.008 18:04:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.008 18:04:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.008 18:04:30 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.008 18:04:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.008 18:04:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.008 18:04:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.008 18:04:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:12.008 [2024-11-18 18:04:30.508834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.008 [2024-11-18 18:04:30.508965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:06:12.267 [2024-11-18 18:04:30.646751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.267 [2024-11-18 18:04:30.696942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.267 [2024-11-18 18:04:30.740685] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:12.267 [2024-11-18 18:04:30.740757] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:12.268 [2024-11-18 18:04:30.740786] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.268 [2024-11-18 18:04:30.805754] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:12.527 18:04:30 -- common/autotest_common.sh@653 -- # es=216 00:06:12.527 18:04:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.527 18:04:30 -- common/autotest_common.sh@662 -- # es=88 00:06:12.527 18:04:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:12.527 18:04:30 -- common/autotest_common.sh@670 -- # es=1 00:06:12.527 18:04:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.527 18:04:30 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:12.527 18:04:30 -- dd/common.sh@98 -- # xtrace_disable 00:06:12.527 18:04:30 -- common/autotest_common.sh@10 -- # set +x 00:06:12.527 18:04:30 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.527 [2024-11-18 18:04:30.978815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:12.527 [2024-11-18 18:04:30.978950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58190 ] 00:06:12.527 [2024-11-18 18:04:31.116277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.786 [2024-11-18 18:04:31.168557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.786  [2024-11-18T18:04:31.650Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.046 00:06:13.046 18:04:31 -- dd/posix.sh@49 -- # [[ or64sdig62gelbt5bytg8uy3j8b08grfordew7ieknu432h6kjgcmsvybwhcmct0fmlvm3bugo50ocsvpccinff9vxqf3pyxxtjqmb9nzd4s9ph98u5e7gojbysqd7mznvgbmiv55j7ocsfjajg70hwrrh2455ehad5ogq1glav9trr345sazaycr551rzm3hqemt9qocjjaxn14nnbn542elw5rsgvl7pvfmo63nygn1kjnthkscjc4w4lseih1w6vg6uvt5fwxmh09lshxjkrrctjx5zxkv29v6hz5jrb0sozod5lxobb7kw36kr362wxwj6f3rni5p75sbldm330jxqvhjfdzd62sepvx8maa9oifdr74pmoc23zmyqeyx43xdaco92f7beih12hk9tqb672py1pvri4yu51srzocqlj3pqd178pfshl3t2pgn17gwtl69fabcefszmlug9izhm4bryua41hn0z9wrpjvred22tz0b3nssb9bswhq == \o\r\6\4\s\d\i\g\6\2\g\e\l\b\t\5\b\y\t\g\8\u\y\3\j\8\b\0\8\g\r\f\o\r\d\e\w\7\i\e\k\n\u\4\3\2\h\6\k\j\g\c\m\s\v\y\b\w\h\c\m\c\t\0\f\m\l\v\m\3\b\u\g\o\5\0\o\c\s\v\p\c\c\i\n\f\f\9\v\x\q\f\3\p\y\x\x\t\j\q\m\b\9\n\z\d\4\s\9\p\h\9\8\u\5\e\7\g\o\j\b\y\s\q\d\7\m\z\n\v\g\b\m\i\v\5\5\j\7\o\c\s\f\j\a\j\g\7\0\h\w\r\r\h\2\4\5\5\e\h\a\d\5\o\g\q\1\g\l\a\v\9\t\r\r\3\4\5\s\a\z\a\y\c\r\5\5\1\r\z\m\3\h\q\e\m\t\9\q\o\c\j\j\a\x\n\1\4\n\n\b\n\5\4\2\e\l\w\5\r\s\g\v\l\7\p\v\f\m\o\6\3\n\y\g\n\1\k\j\n\t\h\k\s\c\j\c\4\w\4\l\s\e\i\h\1\w\6\v\g\6\u\v\t\5\f\w\x\m\h\0\9\l\s\h\x\j\k\r\r\c\t\j\x\5\z\x\k\v\2\9\v\6\h\z\5\j\r\b\0\s\o\z\o\d\5\l\x\o\b\b\7\k\w\3\6\k\r\3\6\2\w\x\w\j\6\f\3\r\n\i\5\p\7\5\s\b\l\d\m\3\3\0\j\x\q\v\h\j\f\d\z\d\6\2\s\e\p\v\x\8\m\a\a\9\o\i\f\d\r\7\4\p\m\o\c\2\3\z\m\y\q\e\y\x\4\3\x\d\a\c\o\9\2\f\7\b\e\i\h\1\2\h\k\9\t\q\b\6\7\2\p\y\1\p\v\r\i\4\y\u\5\1\s\r\z\o\c\q\l\j\3\p\q\d\1\7\8\p\f\s\h\l\3\t\2\p\g\n\1\7\g\w\t\l\6\9\f\a\b\c\e\f\s\z\m\l\u\g\9\i\z\h\m\4\b\r\y\u\a\4\1\h\n\0\z\9\w\r\p\j\v\r\e\d\2\2\t\z\0\b\3\n\s\s\b\9\b\s\w\h\q ]] 00:06:13.046 00:06:13.046 real 0m1.408s 00:06:13.046 user 0m0.784s 00:06:13.046 sys 0m0.292s 00:06:13.046 18:04:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.046 18:04:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.046 ************************************ 00:06:13.046 END TEST dd_flag_nofollow 00:06:13.046 ************************************ 00:06:13.046 18:04:31 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:13.046 18:04:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.046 18:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.046 18:04:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.046 ************************************ 00:06:13.046 START TEST dd_flag_noatime 00:06:13.046 ************************************ 00:06:13.046 18:04:31 -- common/autotest_common.sh@1114 -- # noatime 00:06:13.046 18:04:31 -- dd/posix.sh@53 -- # local atime_if 00:06:13.046 18:04:31 -- dd/posix.sh@54 -- # local atime_of 00:06:13.046 18:04:31 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:13.046 18:04:31 -- dd/common.sh@98 -- # xtrace_disable 00:06:13.046 18:04:31 -- common/autotest_common.sh@10 -- # set +x 00:06:13.046 18:04:31 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.046 18:04:31 -- dd/posix.sh@60 -- # atime_if=1731953071 
00:06:13.046 18:04:31 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.046 18:04:31 -- dd/posix.sh@61 -- # atime_of=1731953071 00:06:13.046 18:04:31 -- dd/posix.sh@66 -- # sleep 1 00:06:13.984 18:04:32 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.984 [2024-11-18 18:04:32.548669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.984 [2024-11-18 18:04:32.548784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58231 ] 00:06:14.244 [2024-11-18 18:04:32.689435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.244 [2024-11-18 18:04:32.759643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.244  [2024-11-18T18:04:33.108Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.504 00:06:14.504 18:04:33 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.504 18:04:33 -- dd/posix.sh@69 -- # (( atime_if == 1731953071 )) 00:06:14.504 18:04:33 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.504 18:04:33 -- dd/posix.sh@70 -- # (( atime_of == 1731953071 )) 00:06:14.504 18:04:33 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.504 [2024-11-18 18:04:33.076347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:14.504 [2024-11-18 18:04:33.076465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58242 ] 00:06:14.764 [2024-11-18 18:04:33.216876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.764 [2024-11-18 18:04:33.270581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.764  [2024-11-18T18:04:33.627Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.023 00:06:15.023 18:04:33 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:15.023 18:04:33 -- dd/posix.sh@73 -- # (( atime_if < 1731953073 )) 00:06:15.023 00:06:15.023 real 0m2.049s 00:06:15.023 user 0m0.573s 00:06:15.023 sys 0m0.228s 00:06:15.023 18:04:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.023 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:15.023 ************************************ 00:06:15.023 END TEST dd_flag_noatime 00:06:15.023 ************************************ 00:06:15.023 18:04:33 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:15.023 18:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.023 18:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.023 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:15.023 ************************************ 00:06:15.023 START TEST dd_flags_misc 00:06:15.023 ************************************ 00:06:15.023 18:04:33 -- common/autotest_common.sh@1114 -- # io 00:06:15.023 18:04:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:15.023 18:04:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:15.023 18:04:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:15.023 18:04:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:15.023 18:04:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:15.023 18:04:33 -- dd/common.sh@98 -- # xtrace_disable 00:06:15.023 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:06:15.023 18:04:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.023 18:04:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:15.282 [2024-11-18 18:04:33.640013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.282 [2024-11-18 18:04:33.640127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58276 ] 00:06:15.282 [2024-11-18 18:04:33.778598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.282 [2024-11-18 18:04:33.829143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.282  [2024-11-18T18:04:34.146Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.542 00:06:15.542 18:04:34 -- dd/posix.sh@93 -- # [[ 17neo4nfo8enxu7zo0pr0z5dyos9az688tqtbjwm9vjdt65mrrlbeayygits5ygnramvdap9h44j34u3wz1kltf65bo12ejuf8e24wfcd66lcwb33dz4hj5lrntnz31ezvih5e6bdqlq4ne6etwjwm7uvtuzdqveerlvw5hb1d5g72h14bmu7s2klbvxemurx07nwo9stcxubhjeu9z6e24nnyywg311aveksnwgtrwxn8c1egliboikufbe3soivgt3hedv242podmsi3exlj58csnlvu9w8hifjaotj241qwkj8igsigw9betn0woq94op1dcty0u9tv441607c8dwyaaz6ds626agy3xd30kpy0wqoegcr3fbinyxzamqs8dmdsj6pou4rbin8xdxy5fyrqkcr0lb5xvhzrjkc0c36t5wireqnfsqbx4znq0vjwuqwxo7la0zvs55j46tfg31psbexpvrroar6km2tlod7vb564n2erzmmo74aqu2 == \1\7\n\e\o\4\n\f\o\8\e\n\x\u\7\z\o\0\p\r\0\z\5\d\y\o\s\9\a\z\6\8\8\t\q\t\b\j\w\m\9\v\j\d\t\6\5\m\r\r\l\b\e\a\y\y\g\i\t\s\5\y\g\n\r\a\m\v\d\a\p\9\h\4\4\j\3\4\u\3\w\z\1\k\l\t\f\6\5\b\o\1\2\e\j\u\f\8\e\2\4\w\f\c\d\6\6\l\c\w\b\3\3\d\z\4\h\j\5\l\r\n\t\n\z\3\1\e\z\v\i\h\5\e\6\b\d\q\l\q\4\n\e\6\e\t\w\j\w\m\7\u\v\t\u\z\d\q\v\e\e\r\l\v\w\5\h\b\1\d\5\g\7\2\h\1\4\b\m\u\7\s\2\k\l\b\v\x\e\m\u\r\x\0\7\n\w\o\9\s\t\c\x\u\b\h\j\e\u\9\z\6\e\2\4\n\n\y\y\w\g\3\1\1\a\v\e\k\s\n\w\g\t\r\w\x\n\8\c\1\e\g\l\i\b\o\i\k\u\f\b\e\3\s\o\i\v\g\t\3\h\e\d\v\2\4\2\p\o\d\m\s\i\3\e\x\l\j\5\8\c\s\n\l\v\u\9\w\8\h\i\f\j\a\o\t\j\2\4\1\q\w\k\j\8\i\g\s\i\g\w\9\b\e\t\n\0\w\o\q\9\4\o\p\1\d\c\t\y\0\u\9\t\v\4\4\1\6\0\7\c\8\d\w\y\a\a\z\6\d\s\6\2\6\a\g\y\3\x\d\3\0\k\p\y\0\w\q\o\e\g\c\r\3\f\b\i\n\y\x\z\a\m\q\s\8\d\m\d\s\j\6\p\o\u\4\r\b\i\n\8\x\d\x\y\5\f\y\r\q\k\c\r\0\l\b\5\x\v\h\z\r\j\k\c\0\c\3\6\t\5\w\i\r\e\q\n\f\s\q\b\x\4\z\n\q\0\v\j\w\u\q\w\x\o\7\l\a\0\z\v\s\5\5\j\4\6\t\f\g\3\1\p\s\b\e\x\p\v\r\r\o\a\r\6\k\m\2\t\l\o\d\7\v\b\5\6\4\n\2\e\r\z\m\m\o\7\4\a\q\u\2 ]] 00:06:15.542 18:04:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.542 18:04:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:15.542 [2024-11-18 18:04:34.109987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.542 [2024-11-18 18:04:34.110133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:06:15.801 [2024-11-18 18:04:34.246507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.801 [2024-11-18 18:04:34.294941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.801  [2024-11-18T18:04:34.664Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.060 00:06:16.061 18:04:34 -- dd/posix.sh@93 -- # [[ 17neo4nfo8enxu7zo0pr0z5dyos9az688tqtbjwm9vjdt65mrrlbeayygits5ygnramvdap9h44j34u3wz1kltf65bo12ejuf8e24wfcd66lcwb33dz4hj5lrntnz31ezvih5e6bdqlq4ne6etwjwm7uvtuzdqveerlvw5hb1d5g72h14bmu7s2klbvxemurx07nwo9stcxubhjeu9z6e24nnyywg311aveksnwgtrwxn8c1egliboikufbe3soivgt3hedv242podmsi3exlj58csnlvu9w8hifjaotj241qwkj8igsigw9betn0woq94op1dcty0u9tv441607c8dwyaaz6ds626agy3xd30kpy0wqoegcr3fbinyxzamqs8dmdsj6pou4rbin8xdxy5fyrqkcr0lb5xvhzrjkc0c36t5wireqnfsqbx4znq0vjwuqwxo7la0zvs55j46tfg31psbexpvrroar6km2tlod7vb564n2erzmmo74aqu2 == \1\7\n\e\o\4\n\f\o\8\e\n\x\u\7\z\o\0\p\r\0\z\5\d\y\o\s\9\a\z\6\8\8\t\q\t\b\j\w\m\9\v\j\d\t\6\5\m\r\r\l\b\e\a\y\y\g\i\t\s\5\y\g\n\r\a\m\v\d\a\p\9\h\4\4\j\3\4\u\3\w\z\1\k\l\t\f\6\5\b\o\1\2\e\j\u\f\8\e\2\4\w\f\c\d\6\6\l\c\w\b\3\3\d\z\4\h\j\5\l\r\n\t\n\z\3\1\e\z\v\i\h\5\e\6\b\d\q\l\q\4\n\e\6\e\t\w\j\w\m\7\u\v\t\u\z\d\q\v\e\e\r\l\v\w\5\h\b\1\d\5\g\7\2\h\1\4\b\m\u\7\s\2\k\l\b\v\x\e\m\u\r\x\0\7\n\w\o\9\s\t\c\x\u\b\h\j\e\u\9\z\6\e\2\4\n\n\y\y\w\g\3\1\1\a\v\e\k\s\n\w\g\t\r\w\x\n\8\c\1\e\g\l\i\b\o\i\k\u\f\b\e\3\s\o\i\v\g\t\3\h\e\d\v\2\4\2\p\o\d\m\s\i\3\e\x\l\j\5\8\c\s\n\l\v\u\9\w\8\h\i\f\j\a\o\t\j\2\4\1\q\w\k\j\8\i\g\s\i\g\w\9\b\e\t\n\0\w\o\q\9\4\o\p\1\d\c\t\y\0\u\9\t\v\4\4\1\6\0\7\c\8\d\w\y\a\a\z\6\d\s\6\2\6\a\g\y\3\x\d\3\0\k\p\y\0\w\q\o\e\g\c\r\3\f\b\i\n\y\x\z\a\m\q\s\8\d\m\d\s\j\6\p\o\u\4\r\b\i\n\8\x\d\x\y\5\f\y\r\q\k\c\r\0\l\b\5\x\v\h\z\r\j\k\c\0\c\3\6\t\5\w\i\r\e\q\n\f\s\q\b\x\4\z\n\q\0\v\j\w\u\q\w\x\o\7\l\a\0\z\v\s\5\5\j\4\6\t\f\g\3\1\p\s\b\e\x\p\v\r\r\o\a\r\6\k\m\2\t\l\o\d\7\v\b\5\6\4\n\2\e\r\z\m\m\o\7\4\a\q\u\2 ]] 00:06:16.061 18:04:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.061 18:04:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:16.061 [2024-11-18 18:04:34.600659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:16.061 [2024-11-18 18:04:34.600786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58287 ] 00:06:16.319 [2024-11-18 18:04:34.741813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.319 [2024-11-18 18:04:34.810917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.319  [2024-11-18T18:04:35.181Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.577 00:06:16.578 18:04:35 -- dd/posix.sh@93 -- # [[ 17neo4nfo8enxu7zo0pr0z5dyos9az688tqtbjwm9vjdt65mrrlbeayygits5ygnramvdap9h44j34u3wz1kltf65bo12ejuf8e24wfcd66lcwb33dz4hj5lrntnz31ezvih5e6bdqlq4ne6etwjwm7uvtuzdqveerlvw5hb1d5g72h14bmu7s2klbvxemurx07nwo9stcxubhjeu9z6e24nnyywg311aveksnwgtrwxn8c1egliboikufbe3soivgt3hedv242podmsi3exlj58csnlvu9w8hifjaotj241qwkj8igsigw9betn0woq94op1dcty0u9tv441607c8dwyaaz6ds626agy3xd30kpy0wqoegcr3fbinyxzamqs8dmdsj6pou4rbin8xdxy5fyrqkcr0lb5xvhzrjkc0c36t5wireqnfsqbx4znq0vjwuqwxo7la0zvs55j46tfg31psbexpvrroar6km2tlod7vb564n2erzmmo74aqu2 == \1\7\n\e\o\4\n\f\o\8\e\n\x\u\7\z\o\0\p\r\0\z\5\d\y\o\s\9\a\z\6\8\8\t\q\t\b\j\w\m\9\v\j\d\t\6\5\m\r\r\l\b\e\a\y\y\g\i\t\s\5\y\g\n\r\a\m\v\d\a\p\9\h\4\4\j\3\4\u\3\w\z\1\k\l\t\f\6\5\b\o\1\2\e\j\u\f\8\e\2\4\w\f\c\d\6\6\l\c\w\b\3\3\d\z\4\h\j\5\l\r\n\t\n\z\3\1\e\z\v\i\h\5\e\6\b\d\q\l\q\4\n\e\6\e\t\w\j\w\m\7\u\v\t\u\z\d\q\v\e\e\r\l\v\w\5\h\b\1\d\5\g\7\2\h\1\4\b\m\u\7\s\2\k\l\b\v\x\e\m\u\r\x\0\7\n\w\o\9\s\t\c\x\u\b\h\j\e\u\9\z\6\e\2\4\n\n\y\y\w\g\3\1\1\a\v\e\k\s\n\w\g\t\r\w\x\n\8\c\1\e\g\l\i\b\o\i\k\u\f\b\e\3\s\o\i\v\g\t\3\h\e\d\v\2\4\2\p\o\d\m\s\i\3\e\x\l\j\5\8\c\s\n\l\v\u\9\w\8\h\i\f\j\a\o\t\j\2\4\1\q\w\k\j\8\i\g\s\i\g\w\9\b\e\t\n\0\w\o\q\9\4\o\p\1\d\c\t\y\0\u\9\t\v\4\4\1\6\0\7\c\8\d\w\y\a\a\z\6\d\s\6\2\6\a\g\y\3\x\d\3\0\k\p\y\0\w\q\o\e\g\c\r\3\f\b\i\n\y\x\z\a\m\q\s\8\d\m\d\s\j\6\p\o\u\4\r\b\i\n\8\x\d\x\y\5\f\y\r\q\k\c\r\0\l\b\5\x\v\h\z\r\j\k\c\0\c\3\6\t\5\w\i\r\e\q\n\f\s\q\b\x\4\z\n\q\0\v\j\w\u\q\w\x\o\7\l\a\0\z\v\s\5\5\j\4\6\t\f\g\3\1\p\s\b\e\x\p\v\r\r\o\a\r\6\k\m\2\t\l\o\d\7\v\b\5\6\4\n\2\e\r\z\m\m\o\7\4\a\q\u\2 ]] 00:06:16.578 18:04:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.578 18:04:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:16.578 [2024-11-18 18:04:35.120075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:16.578 [2024-11-18 18:04:35.120223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58294 ] 00:06:16.837 [2024-11-18 18:04:35.259056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.837 [2024-11-18 18:04:35.309741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.837  [2024-11-18T18:04:35.701Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.097 00:06:17.097 18:04:35 -- dd/posix.sh@93 -- # [[ 17neo4nfo8enxu7zo0pr0z5dyos9az688tqtbjwm9vjdt65mrrlbeayygits5ygnramvdap9h44j34u3wz1kltf65bo12ejuf8e24wfcd66lcwb33dz4hj5lrntnz31ezvih5e6bdqlq4ne6etwjwm7uvtuzdqveerlvw5hb1d5g72h14bmu7s2klbvxemurx07nwo9stcxubhjeu9z6e24nnyywg311aveksnwgtrwxn8c1egliboikufbe3soivgt3hedv242podmsi3exlj58csnlvu9w8hifjaotj241qwkj8igsigw9betn0woq94op1dcty0u9tv441607c8dwyaaz6ds626agy3xd30kpy0wqoegcr3fbinyxzamqs8dmdsj6pou4rbin8xdxy5fyrqkcr0lb5xvhzrjkc0c36t5wireqnfsqbx4znq0vjwuqwxo7la0zvs55j46tfg31psbexpvrroar6km2tlod7vb564n2erzmmo74aqu2 == \1\7\n\e\o\4\n\f\o\8\e\n\x\u\7\z\o\0\p\r\0\z\5\d\y\o\s\9\a\z\6\8\8\t\q\t\b\j\w\m\9\v\j\d\t\6\5\m\r\r\l\b\e\a\y\y\g\i\t\s\5\y\g\n\r\a\m\v\d\a\p\9\h\4\4\j\3\4\u\3\w\z\1\k\l\t\f\6\5\b\o\1\2\e\j\u\f\8\e\2\4\w\f\c\d\6\6\l\c\w\b\3\3\d\z\4\h\j\5\l\r\n\t\n\z\3\1\e\z\v\i\h\5\e\6\b\d\q\l\q\4\n\e\6\e\t\w\j\w\m\7\u\v\t\u\z\d\q\v\e\e\r\l\v\w\5\h\b\1\d\5\g\7\2\h\1\4\b\m\u\7\s\2\k\l\b\v\x\e\m\u\r\x\0\7\n\w\o\9\s\t\c\x\u\b\h\j\e\u\9\z\6\e\2\4\n\n\y\y\w\g\3\1\1\a\v\e\k\s\n\w\g\t\r\w\x\n\8\c\1\e\g\l\i\b\o\i\k\u\f\b\e\3\s\o\i\v\g\t\3\h\e\d\v\2\4\2\p\o\d\m\s\i\3\e\x\l\j\5\8\c\s\n\l\v\u\9\w\8\h\i\f\j\a\o\t\j\2\4\1\q\w\k\j\8\i\g\s\i\g\w\9\b\e\t\n\0\w\o\q\9\4\o\p\1\d\c\t\y\0\u\9\t\v\4\4\1\6\0\7\c\8\d\w\y\a\a\z\6\d\s\6\2\6\a\g\y\3\x\d\3\0\k\p\y\0\w\q\o\e\g\c\r\3\f\b\i\n\y\x\z\a\m\q\s\8\d\m\d\s\j\6\p\o\u\4\r\b\i\n\8\x\d\x\y\5\f\y\r\q\k\c\r\0\l\b\5\x\v\h\z\r\j\k\c\0\c\3\6\t\5\w\i\r\e\q\n\f\s\q\b\x\4\z\n\q\0\v\j\w\u\q\w\x\o\7\l\a\0\z\v\s\5\5\j\4\6\t\f\g\3\1\p\s\b\e\x\p\v\r\r\o\a\r\6\k\m\2\t\l\o\d\7\v\b\5\6\4\n\2\e\r\z\m\m\o\7\4\a\q\u\2 ]] 00:06:17.097 18:04:35 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:17.097 18:04:35 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:17.097 18:04:35 -- dd/common.sh@98 -- # xtrace_disable 00:06:17.097 18:04:35 -- common/autotest_common.sh@10 -- # set +x 00:06:17.097 18:04:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.097 18:04:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:17.097 [2024-11-18 18:04:35.612967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.097 [2024-11-18 18:04:35.613115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58302 ] 00:06:17.357 [2024-11-18 18:04:35.751748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.357 [2024-11-18 18:04:35.801343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.357  [2024-11-18T18:04:36.222Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.618 00:06:17.619 18:04:36 -- dd/posix.sh@93 -- # [[ 5zs4qwz13pwcchz95sv3vw7xpmqgfe2zrtjn7tw7kqhoe00k7qrom2yz2irw2lp5opnjfg7vvexph91n7yawlthk5jeo6fh85ffguzughrbmrz439eujuaj4x9o8b1onx07yh0geifpncjkbnqyhij7tsvehpml6ls9shb1qn4nhqvgvy1gxpvh5wibcpfm1mv1b964cnpympuubgcyzeti7tx7m1ugnj1iqz0ygzp5cc0lzcdiubty92urinsrwd37bjolc9w5i9xfapoqr75q2rfbq6eu9sdrq21miwxj59yrjqc4u6879tql6zi4sv06vec9nuotzvqmugq38xdp8qaz84qcszds3y2q2pgmqzaed89bvpv1c2xy6xxar1g0xrz1o8gyhsd9t1vzmdonfa4tudfzdjwgpmewbv8xpgh5hnoxrlb9sbhbwdhlwl7546qpa5zsaza9765t6tzhh7t7g1aokdv19ciaadtgb1k85kx3gcf2zulb6jsc4 == \5\z\s\4\q\w\z\1\3\p\w\c\c\h\z\9\5\s\v\3\v\w\7\x\p\m\q\g\f\e\2\z\r\t\j\n\7\t\w\7\k\q\h\o\e\0\0\k\7\q\r\o\m\2\y\z\2\i\r\w\2\l\p\5\o\p\n\j\f\g\7\v\v\e\x\p\h\9\1\n\7\y\a\w\l\t\h\k\5\j\e\o\6\f\h\8\5\f\f\g\u\z\u\g\h\r\b\m\r\z\4\3\9\e\u\j\u\a\j\4\x\9\o\8\b\1\o\n\x\0\7\y\h\0\g\e\i\f\p\n\c\j\k\b\n\q\y\h\i\j\7\t\s\v\e\h\p\m\l\6\l\s\9\s\h\b\1\q\n\4\n\h\q\v\g\v\y\1\g\x\p\v\h\5\w\i\b\c\p\f\m\1\m\v\1\b\9\6\4\c\n\p\y\m\p\u\u\b\g\c\y\z\e\t\i\7\t\x\7\m\1\u\g\n\j\1\i\q\z\0\y\g\z\p\5\c\c\0\l\z\c\d\i\u\b\t\y\9\2\u\r\i\n\s\r\w\d\3\7\b\j\o\l\c\9\w\5\i\9\x\f\a\p\o\q\r\7\5\q\2\r\f\b\q\6\e\u\9\s\d\r\q\2\1\m\i\w\x\j\5\9\y\r\j\q\c\4\u\6\8\7\9\t\q\l\6\z\i\4\s\v\0\6\v\e\c\9\n\u\o\t\z\v\q\m\u\g\q\3\8\x\d\p\8\q\a\z\8\4\q\c\s\z\d\s\3\y\2\q\2\p\g\m\q\z\a\e\d\8\9\b\v\p\v\1\c\2\x\y\6\x\x\a\r\1\g\0\x\r\z\1\o\8\g\y\h\s\d\9\t\1\v\z\m\d\o\n\f\a\4\t\u\d\f\z\d\j\w\g\p\m\e\w\b\v\8\x\p\g\h\5\h\n\o\x\r\l\b\9\s\b\h\b\w\d\h\l\w\l\7\5\4\6\q\p\a\5\z\s\a\z\a\9\7\6\5\t\6\t\z\h\h\7\t\7\g\1\a\o\k\d\v\1\9\c\i\a\a\d\t\g\b\1\k\8\5\k\x\3\g\c\f\2\z\u\l\b\6\j\s\c\4 ]] 00:06:17.619 18:04:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.619 18:04:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:17.619 [2024-11-18 18:04:36.086453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:17.619 [2024-11-18 18:04:36.086608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58309 ] 00:06:17.878 [2024-11-18 18:04:36.221309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.878 [2024-11-18 18:04:36.271994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.878  [2024-11-18T18:04:36.742Z] Copying: 512/512 [B] (average 500 kBps) 00:06:18.138 00:06:18.138 18:04:36 -- dd/posix.sh@93 -- # [[ 5zs4qwz13pwcchz95sv3vw7xpmqgfe2zrtjn7tw7kqhoe00k7qrom2yz2irw2lp5opnjfg7vvexph91n7yawlthk5jeo6fh85ffguzughrbmrz439eujuaj4x9o8b1onx07yh0geifpncjkbnqyhij7tsvehpml6ls9shb1qn4nhqvgvy1gxpvh5wibcpfm1mv1b964cnpympuubgcyzeti7tx7m1ugnj1iqz0ygzp5cc0lzcdiubty92urinsrwd37bjolc9w5i9xfapoqr75q2rfbq6eu9sdrq21miwxj59yrjqc4u6879tql6zi4sv06vec9nuotzvqmugq38xdp8qaz84qcszds3y2q2pgmqzaed89bvpv1c2xy6xxar1g0xrz1o8gyhsd9t1vzmdonfa4tudfzdjwgpmewbv8xpgh5hnoxrlb9sbhbwdhlwl7546qpa5zsaza9765t6tzhh7t7g1aokdv19ciaadtgb1k85kx3gcf2zulb6jsc4 == \5\z\s\4\q\w\z\1\3\p\w\c\c\h\z\9\5\s\v\3\v\w\7\x\p\m\q\g\f\e\2\z\r\t\j\n\7\t\w\7\k\q\h\o\e\0\0\k\7\q\r\o\m\2\y\z\2\i\r\w\2\l\p\5\o\p\n\j\f\g\7\v\v\e\x\p\h\9\1\n\7\y\a\w\l\t\h\k\5\j\e\o\6\f\h\8\5\f\f\g\u\z\u\g\h\r\b\m\r\z\4\3\9\e\u\j\u\a\j\4\x\9\o\8\b\1\o\n\x\0\7\y\h\0\g\e\i\f\p\n\c\j\k\b\n\q\y\h\i\j\7\t\s\v\e\h\p\m\l\6\l\s\9\s\h\b\1\q\n\4\n\h\q\v\g\v\y\1\g\x\p\v\h\5\w\i\b\c\p\f\m\1\m\v\1\b\9\6\4\c\n\p\y\m\p\u\u\b\g\c\y\z\e\t\i\7\t\x\7\m\1\u\g\n\j\1\i\q\z\0\y\g\z\p\5\c\c\0\l\z\c\d\i\u\b\t\y\9\2\u\r\i\n\s\r\w\d\3\7\b\j\o\l\c\9\w\5\i\9\x\f\a\p\o\q\r\7\5\q\2\r\f\b\q\6\e\u\9\s\d\r\q\2\1\m\i\w\x\j\5\9\y\r\j\q\c\4\u\6\8\7\9\t\q\l\6\z\i\4\s\v\0\6\v\e\c\9\n\u\o\t\z\v\q\m\u\g\q\3\8\x\d\p\8\q\a\z\8\4\q\c\s\z\d\s\3\y\2\q\2\p\g\m\q\z\a\e\d\8\9\b\v\p\v\1\c\2\x\y\6\x\x\a\r\1\g\0\x\r\z\1\o\8\g\y\h\s\d\9\t\1\v\z\m\d\o\n\f\a\4\t\u\d\f\z\d\j\w\g\p\m\e\w\b\v\8\x\p\g\h\5\h\n\o\x\r\l\b\9\s\b\h\b\w\d\h\l\w\l\7\5\4\6\q\p\a\5\z\s\a\z\a\9\7\6\5\t\6\t\z\h\h\7\t\7\g\1\a\o\k\d\v\1\9\c\i\a\a\d\t\g\b\1\k\8\5\k\x\3\g\c\f\2\z\u\l\b\6\j\s\c\4 ]] 00:06:18.138 18:04:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:18.138 18:04:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:18.138 [2024-11-18 18:04:36.543393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:18.138 [2024-11-18 18:04:36.543516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58319 ] 00:06:18.138 [2024-11-18 18:04:36.681858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.138 [2024-11-18 18:04:36.729004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.397  [2024-11-18T18:04:37.001Z] Copying: 512/512 [B] (average 250 kBps) 00:06:18.397 00:06:18.397 18:04:36 -- dd/posix.sh@93 -- # [[ 5zs4qwz13pwcchz95sv3vw7xpmqgfe2zrtjn7tw7kqhoe00k7qrom2yz2irw2lp5opnjfg7vvexph91n7yawlthk5jeo6fh85ffguzughrbmrz439eujuaj4x9o8b1onx07yh0geifpncjkbnqyhij7tsvehpml6ls9shb1qn4nhqvgvy1gxpvh5wibcpfm1mv1b964cnpympuubgcyzeti7tx7m1ugnj1iqz0ygzp5cc0lzcdiubty92urinsrwd37bjolc9w5i9xfapoqr75q2rfbq6eu9sdrq21miwxj59yrjqc4u6879tql6zi4sv06vec9nuotzvqmugq38xdp8qaz84qcszds3y2q2pgmqzaed89bvpv1c2xy6xxar1g0xrz1o8gyhsd9t1vzmdonfa4tudfzdjwgpmewbv8xpgh5hnoxrlb9sbhbwdhlwl7546qpa5zsaza9765t6tzhh7t7g1aokdv19ciaadtgb1k85kx3gcf2zulb6jsc4 == \5\z\s\4\q\w\z\1\3\p\w\c\c\h\z\9\5\s\v\3\v\w\7\x\p\m\q\g\f\e\2\z\r\t\j\n\7\t\w\7\k\q\h\o\e\0\0\k\7\q\r\o\m\2\y\z\2\i\r\w\2\l\p\5\o\p\n\j\f\g\7\v\v\e\x\p\h\9\1\n\7\y\a\w\l\t\h\k\5\j\e\o\6\f\h\8\5\f\f\g\u\z\u\g\h\r\b\m\r\z\4\3\9\e\u\j\u\a\j\4\x\9\o\8\b\1\o\n\x\0\7\y\h\0\g\e\i\f\p\n\c\j\k\b\n\q\y\h\i\j\7\t\s\v\e\h\p\m\l\6\l\s\9\s\h\b\1\q\n\4\n\h\q\v\g\v\y\1\g\x\p\v\h\5\w\i\b\c\p\f\m\1\m\v\1\b\9\6\4\c\n\p\y\m\p\u\u\b\g\c\y\z\e\t\i\7\t\x\7\m\1\u\g\n\j\1\i\q\z\0\y\g\z\p\5\c\c\0\l\z\c\d\i\u\b\t\y\9\2\u\r\i\n\s\r\w\d\3\7\b\j\o\l\c\9\w\5\i\9\x\f\a\p\o\q\r\7\5\q\2\r\f\b\q\6\e\u\9\s\d\r\q\2\1\m\i\w\x\j\5\9\y\r\j\q\c\4\u\6\8\7\9\t\q\l\6\z\i\4\s\v\0\6\v\e\c\9\n\u\o\t\z\v\q\m\u\g\q\3\8\x\d\p\8\q\a\z\8\4\q\c\s\z\d\s\3\y\2\q\2\p\g\m\q\z\a\e\d\8\9\b\v\p\v\1\c\2\x\y\6\x\x\a\r\1\g\0\x\r\z\1\o\8\g\y\h\s\d\9\t\1\v\z\m\d\o\n\f\a\4\t\u\d\f\z\d\j\w\g\p\m\e\w\b\v\8\x\p\g\h\5\h\n\o\x\r\l\b\9\s\b\h\b\w\d\h\l\w\l\7\5\4\6\q\p\a\5\z\s\a\z\a\9\7\6\5\t\6\t\z\h\h\7\t\7\g\1\a\o\k\d\v\1\9\c\i\a\a\d\t\g\b\1\k\8\5\k\x\3\g\c\f\2\z\u\l\b\6\j\s\c\4 ]] 00:06:18.397 18:04:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:18.397 18:04:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:18.657 [2024-11-18 18:04:37.003821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:18.657 [2024-11-18 18:04:37.003969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58326 ] 00:06:18.657 [2024-11-18 18:04:37.133485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.657 [2024-11-18 18:04:37.183351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.657  [2024-11-18T18:04:37.520Z] Copying: 512/512 [B] (average 250 kBps) 00:06:18.916 00:06:18.916 18:04:37 -- dd/posix.sh@93 -- # [[ 5zs4qwz13pwcchz95sv3vw7xpmqgfe2zrtjn7tw7kqhoe00k7qrom2yz2irw2lp5opnjfg7vvexph91n7yawlthk5jeo6fh85ffguzughrbmrz439eujuaj4x9o8b1onx07yh0geifpncjkbnqyhij7tsvehpml6ls9shb1qn4nhqvgvy1gxpvh5wibcpfm1mv1b964cnpympuubgcyzeti7tx7m1ugnj1iqz0ygzp5cc0lzcdiubty92urinsrwd37bjolc9w5i9xfapoqr75q2rfbq6eu9sdrq21miwxj59yrjqc4u6879tql6zi4sv06vec9nuotzvqmugq38xdp8qaz84qcszds3y2q2pgmqzaed89bvpv1c2xy6xxar1g0xrz1o8gyhsd9t1vzmdonfa4tudfzdjwgpmewbv8xpgh5hnoxrlb9sbhbwdhlwl7546qpa5zsaza9765t6tzhh7t7g1aokdv19ciaadtgb1k85kx3gcf2zulb6jsc4 == \5\z\s\4\q\w\z\1\3\p\w\c\c\h\z\9\5\s\v\3\v\w\7\x\p\m\q\g\f\e\2\z\r\t\j\n\7\t\w\7\k\q\h\o\e\0\0\k\7\q\r\o\m\2\y\z\2\i\r\w\2\l\p\5\o\p\n\j\f\g\7\v\v\e\x\p\h\9\1\n\7\y\a\w\l\t\h\k\5\j\e\o\6\f\h\8\5\f\f\g\u\z\u\g\h\r\b\m\r\z\4\3\9\e\u\j\u\a\j\4\x\9\o\8\b\1\o\n\x\0\7\y\h\0\g\e\i\f\p\n\c\j\k\b\n\q\y\h\i\j\7\t\s\v\e\h\p\m\l\6\l\s\9\s\h\b\1\q\n\4\n\h\q\v\g\v\y\1\g\x\p\v\h\5\w\i\b\c\p\f\m\1\m\v\1\b\9\6\4\c\n\p\y\m\p\u\u\b\g\c\y\z\e\t\i\7\t\x\7\m\1\u\g\n\j\1\i\q\z\0\y\g\z\p\5\c\c\0\l\z\c\d\i\u\b\t\y\9\2\u\r\i\n\s\r\w\d\3\7\b\j\o\l\c\9\w\5\i\9\x\f\a\p\o\q\r\7\5\q\2\r\f\b\q\6\e\u\9\s\d\r\q\2\1\m\i\w\x\j\5\9\y\r\j\q\c\4\u\6\8\7\9\t\q\l\6\z\i\4\s\v\0\6\v\e\c\9\n\u\o\t\z\v\q\m\u\g\q\3\8\x\d\p\8\q\a\z\8\4\q\c\s\z\d\s\3\y\2\q\2\p\g\m\q\z\a\e\d\8\9\b\v\p\v\1\c\2\x\y\6\x\x\a\r\1\g\0\x\r\z\1\o\8\g\y\h\s\d\9\t\1\v\z\m\d\o\n\f\a\4\t\u\d\f\z\d\j\w\g\p\m\e\w\b\v\8\x\p\g\h\5\h\n\o\x\r\l\b\9\s\b\h\b\w\d\h\l\w\l\7\5\4\6\q\p\a\5\z\s\a\z\a\9\7\6\5\t\6\t\z\h\h\7\t\7\g\1\a\o\k\d\v\1\9\c\i\a\a\d\t\g\b\1\k\8\5\k\x\3\g\c\f\2\z\u\l\b\6\j\s\c\4 ]] 00:06:18.916 00:06:18.916 real 0m3.849s 00:06:18.916 user 0m2.100s 00:06:18.916 sys 0m0.766s 00:06:18.916 18:04:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.916 18:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.916 ************************************ 00:06:18.916 END TEST dd_flags_misc 00:06:18.916 ************************************ 00:06:18.916 18:04:37 -- dd/posix.sh@131 -- # tests_forced_aio 00:06:18.916 18:04:37 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:18.916 * Second test run, disabling liburing, forcing AIO 00:06:18.916 18:04:37 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:18.916 18:04:37 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:18.916 18:04:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.916 18:04:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.916 18:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.916 ************************************ 00:06:18.916 START TEST dd_flag_append_forced_aio 00:06:18.916 ************************************ 00:06:18.916 18:04:37 -- common/autotest_common.sh@1114 -- # append 00:06:18.916 18:04:37 -- dd/posix.sh@16 -- # local dump0 00:06:18.916 18:04:37 -- dd/posix.sh@17 -- # local dump1 00:06:18.917 18:04:37 -- dd/posix.sh@19 -- # gen_bytes 32 
00:06:18.917 18:04:37 -- dd/common.sh@98 -- # xtrace_disable 00:06:18.917 18:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.917 18:04:37 -- dd/posix.sh@19 -- # dump0=thowcb5v45722pxvvymxfqq7qyqifphc 00:06:18.917 18:04:37 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:18.917 18:04:37 -- dd/common.sh@98 -- # xtrace_disable 00:06:18.917 18:04:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.917 18:04:37 -- dd/posix.sh@20 -- # dump1=50b8hqau3lr120jc15bltjtnxiut4v59 00:06:18.917 18:04:37 -- dd/posix.sh@22 -- # printf %s thowcb5v45722pxvvymxfqq7qyqifphc 00:06:18.917 18:04:37 -- dd/posix.sh@23 -- # printf %s 50b8hqau3lr120jc15bltjtnxiut4v59 00:06:18.917 18:04:37 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:19.175 [2024-11-18 18:04:37.550018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.175 [2024-11-18 18:04:37.550207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:06:19.175 [2024-11-18 18:04:37.692823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.175 [2024-11-18 18:04:37.761884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.434  [2024-11-18T18:04:38.038Z] Copying: 32/32 [B] (average 31 kBps) 00:06:19.434 00:06:19.434 18:04:38 -- dd/posix.sh@27 -- # [[ 50b8hqau3lr120jc15bltjtnxiut4v59thowcb5v45722pxvvymxfqq7qyqifphc == \5\0\b\8\h\q\a\u\3\l\r\1\2\0\j\c\1\5\b\l\t\j\t\n\x\i\u\t\4\v\5\9\t\h\o\w\c\b\5\v\4\5\7\2\2\p\x\v\v\y\m\x\f\q\q\7\q\y\q\i\f\p\h\c ]] 00:06:19.434 00:06:19.434 real 0m0.533s 00:06:19.434 user 0m0.289s 00:06:19.434 sys 0m0.117s 00:06:19.434 18:04:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.434 ************************************ 00:06:19.434 END TEST dd_flag_append_forced_aio 00:06:19.434 ************************************ 00:06:19.434 18:04:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.694 18:04:38 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:19.694 18:04:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.694 18:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.694 18:04:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.694 ************************************ 00:06:19.694 START TEST dd_flag_directory_forced_aio 00:06:19.694 ************************************ 00:06:19.694 18:04:38 -- common/autotest_common.sh@1114 -- # directory 00:06:19.694 18:04:38 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.694 18:04:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.694 18:04:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.694 18:04:38 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.694 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.694 18:04:38 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.694 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.694 18:04:38 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.694 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.694 18:04:38 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.694 18:04:38 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.694 18:04:38 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.694 [2024-11-18 18:04:38.128125] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.694 [2024-11-18 18:04:38.128249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58379 ] 00:06:19.694 [2024-11-18 18:04:38.264919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.953 [2024-11-18 18:04:38.314952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.953 [2024-11-18 18:04:38.356894] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.953 [2024-11-18 18:04:38.356959] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.953 [2024-11-18 18:04:38.356971] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.953 [2024-11-18 18:04:38.420525] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:19.953 18:04:38 -- common/autotest_common.sh@653 -- # es=236 00:06:19.953 18:04:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.953 18:04:38 -- common/autotest_common.sh@662 -- # es=108 00:06:19.953 18:04:38 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.953 18:04:38 -- common/autotest_common.sh@670 -- # es=1 00:06:19.953 18:04:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.953 18:04:38 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.953 18:04:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.953 18:04:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.953 18:04:38 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.953 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.953 18:04:38 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.953 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.953 18:04:38 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.953 18:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.953 18:04:38 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.953 18:04:38 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.953 18:04:38 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:20.213 [2024-11-18 18:04:38.585718] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.213 [2024-11-18 18:04:38.585815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58389 ] 00:06:20.213 [2024-11-18 18:04:38.723952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.213 [2024-11-18 18:04:38.771847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.213 [2024-11-18 18:04:38.814189] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:20.213 [2024-11-18 18:04:38.814263] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:20.213 [2024-11-18 18:04:38.814275] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.478 [2024-11-18 18:04:38.875819] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:20.478 18:04:38 -- common/autotest_common.sh@653 -- # es=236 00:06:20.478 18:04:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.478 18:04:38 -- common/autotest_common.sh@662 -- # es=108 00:06:20.478 18:04:38 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.478 18:04:38 -- common/autotest_common.sh@670 -- # es=1 00:06:20.478 18:04:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.478 00:06:20.478 real 0m0.900s 00:06:20.478 user 0m0.506s 00:06:20.478 sys 0m0.184s 00:06:20.478 18:04:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.478 ************************************ 00:06:20.478 END TEST dd_flag_directory_forced_aio 00:06:20.478 18:04:38 -- common/autotest_common.sh@10 -- # set +x 00:06:20.478 ************************************ 00:06:20.478 18:04:39 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:20.478 18:04:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.478 18:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.478 18:04:39 -- common/autotest_common.sh@10 -- # set +x 00:06:20.478 ************************************ 00:06:20.478 START TEST dd_flag_nofollow_forced_aio 00:06:20.478 ************************************ 00:06:20.478 18:04:39 -- common/autotest_common.sh@1114 -- # nofollow 00:06:20.478 18:04:39 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.478 18:04:39 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.478 18:04:39 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.478 18:04:39 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.478 18:04:39 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.478 18:04:39 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.478 18:04:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.478 18:04:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.478 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.478 18:04:39 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.478 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.478 18:04:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.478 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.478 18:04:39 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.478 18:04:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:20.478 18:04:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.745 [2024-11-18 18:04:39.093348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:20.745 [2024-11-18 18:04:39.093472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58417 ] 00:06:20.745 [2024-11-18 18:04:39.230536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.745 [2024-11-18 18:04:39.283311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.745 [2024-11-18 18:04:39.328453] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:20.745 [2024-11-18 18:04:39.328526] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:20.745 [2024-11-18 18:04:39.328567] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.005 [2024-11-18 18:04:39.393093] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:21.005 18:04:39 -- common/autotest_common.sh@653 -- # es=216 00:06:21.005 18:04:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.005 18:04:39 -- common/autotest_common.sh@662 -- # es=88 00:06:21.005 18:04:39 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.005 18:04:39 -- common/autotest_common.sh@670 -- # es=1 00:06:21.005 18:04:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.005 18:04:39 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.005 18:04:39 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.005 18:04:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.005 18:04:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.005 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.005 18:04:39 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.005 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.005 18:04:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.005 18:04:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.005 18:04:39 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.005 18:04:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.005 18:04:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:21.005 [2024-11-18 18:04:39.549249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.005 [2024-11-18 18:04:39.549346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58427 ] 00:06:21.264 [2024-11-18 18:04:39.685427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.264 [2024-11-18 18:04:39.738282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.264 [2024-11-18 18:04:39.782230] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:21.264 [2024-11-18 18:04:39.782297] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:21.264 [2024-11-18 18:04:39.782325] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.264 [2024-11-18 18:04:39.840628] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:21.523 18:04:39 -- common/autotest_common.sh@653 -- # es=216 00:06:21.523 18:04:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.523 18:04:39 -- common/autotest_common.sh@662 -- # es=88 00:06:21.523 18:04:39 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.523 18:04:39 -- common/autotest_common.sh@670 -- # es=1 00:06:21.523 18:04:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.523 18:04:39 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:21.523 18:04:39 -- dd/common.sh@98 -- # xtrace_disable 00:06:21.523 18:04:39 -- common/autotest_common.sh@10 -- # set +x 00:06:21.523 18:04:39 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.523 [2024-11-18 18:04:40.014136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.523 [2024-11-18 18:04:40.014231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58434 ] 00:06:21.782 [2024-11-18 18:04:40.150766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.782 [2024-11-18 18:04:40.205253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.782  [2024-11-18T18:04:40.645Z] Copying: 512/512 [B] (average 500 kBps) 00:06:22.041 00:06:22.041 18:04:40 -- dd/posix.sh@49 -- # [[ kaqkf2pd5lzvs1itdxdiybhfnx84u9dearztzypotzdwvncm8c1apprrna8nmh9e43bhar0vp78w0flrwd4d2i1h1bbcivlfo0z1xv4wu52zq7jtinp85anb4xand7tsq29tyz8dkzoi2fxzzbqp9j3fgs2g0km28yuk6zq8ll7ac5h1zgtngw5zukr71c89xff5brdwyrcmhchxrhyz4rep8n136wrq2hnn33mae0w9rivel8vsx4y7v6v585bmm2w8tvn644rd24hl32lzi5gj8er92z0kbnft9wi6d707gzu44xnmpq2sd9upnl12cgmdode8fevteghnenvklmdk7d5y2orcvd61w45ua566jywnstb3z3k1tku11hseczvf0kt0d2mmpl3hkh15ve37fsp683zd0ngzgixwvjoszd9b4poh1ttb55ehnkrwmh2iyzs9sjee81hagpbwqqq7pn6kwq1n61cl3r2cvflac8ldndvttlmm2w4jp69c == \k\a\q\k\f\2\p\d\5\l\z\v\s\1\i\t\d\x\d\i\y\b\h\f\n\x\8\4\u\9\d\e\a\r\z\t\z\y\p\o\t\z\d\w\v\n\c\m\8\c\1\a\p\p\r\r\n\a\8\n\m\h\9\e\4\3\b\h\a\r\0\v\p\7\8\w\0\f\l\r\w\d\4\d\2\i\1\h\1\b\b\c\i\v\l\f\o\0\z\1\x\v\4\w\u\5\2\z\q\7\j\t\i\n\p\8\5\a\n\b\4\x\a\n\d\7\t\s\q\2\9\t\y\z\8\d\k\z\o\i\2\f\x\z\z\b\q\p\9\j\3\f\g\s\2\g\0\k\m\2\8\y\u\k\6\z\q\8\l\l\7\a\c\5\h\1\z\g\t\n\g\w\5\z\u\k\r\7\1\c\8\9\x\f\f\5\b\r\d\w\y\r\c\m\h\c\h\x\r\h\y\z\4\r\e\p\8\n\1\3\6\w\r\q\2\h\n\n\3\3\m\a\e\0\w\9\r\i\v\e\l\8\v\s\x\4\y\7\v\6\v\5\8\5\b\m\m\2\w\8\t\v\n\6\4\4\r\d\2\4\h\l\3\2\l\z\i\5\g\j\8\e\r\9\2\z\0\k\b\n\f\t\9\w\i\6\d\7\0\7\g\z\u\4\4\x\n\m\p\q\2\s\d\9\u\p\n\l\1\2\c\g\m\d\o\d\e\8\f\e\v\t\e\g\h\n\e\n\v\k\l\m\d\k\7\d\5\y\2\o\r\c\v\d\6\1\w\4\5\u\a\5\6\6\j\y\w\n\s\t\b\3\z\3\k\1\t\k\u\1\1\h\s\e\c\z\v\f\0\k\t\0\d\2\m\m\p\l\3\h\k\h\1\5\v\e\3\7\f\s\p\6\8\3\z\d\0\n\g\z\g\i\x\w\v\j\o\s\z\d\9\b\4\p\o\h\1\t\t\b\5\5\e\h\n\k\r\w\m\h\2\i\y\z\s\9\s\j\e\e\8\1\h\a\g\p\b\w\q\q\q\7\p\n\6\k\w\q\1\n\6\1\c\l\3\r\2\c\v\f\l\a\c\8\l\d\n\d\v\t\t\l\m\m\2\w\4\j\p\6\9\c ]] 00:06:22.041 00:06:22.041 real 0m1.402s 00:06:22.041 user 0m0.781s 00:06:22.041 sys 0m0.294s 00:06:22.041 18:04:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.041 ************************************ 00:06:22.041 END TEST dd_flag_nofollow_forced_aio 00:06:22.041 ************************************ 00:06:22.041 18:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.041 18:04:40 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:22.041 18:04:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.041 18:04:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.041 18:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.041 ************************************ 00:06:22.041 START TEST dd_flag_noatime_forced_aio 00:06:22.041 ************************************ 00:06:22.041 18:04:40 -- common/autotest_common.sh@1114 -- # noatime 00:06:22.041 18:04:40 -- dd/posix.sh@53 -- # local atime_if 00:06:22.041 18:04:40 -- dd/posix.sh@54 -- # local atime_of 00:06:22.041 18:04:40 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:22.041 18:04:40 -- dd/common.sh@98 -- # xtrace_disable 00:06:22.041 18:04:40 -- common/autotest_common.sh@10 -- # set +x 00:06:22.041 18:04:40 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.041 18:04:40 -- dd/posix.sh@60 -- 
# atime_if=1731953080 00:06:22.041 18:04:40 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.041 18:04:40 -- dd/posix.sh@61 -- # atime_of=1731953080 00:06:22.041 18:04:40 -- dd/posix.sh@66 -- # sleep 1 00:06:22.977 18:04:41 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.977 [2024-11-18 18:04:41.564091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.977 [2024-11-18 18:04:41.564190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58475 ] 00:06:23.248 [2024-11-18 18:04:41.709068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.248 [2024-11-18 18:04:41.788034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.248  [2024-11-18T18:04:42.125Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.521 00:06:23.521 18:04:42 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.521 18:04:42 -- dd/posix.sh@69 -- # (( atime_if == 1731953080 )) 00:06:23.521 18:04:42 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.521 18:04:42 -- dd/posix.sh@70 -- # (( atime_of == 1731953080 )) 00:06:23.521 18:04:42 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.521 [2024-11-18 18:04:42.073906] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.521 [2024-11-18 18:04:42.073992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:06:23.780 [2024-11-18 18:04:42.210308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.780 [2024-11-18 18:04:42.262115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.780  [2024-11-18T18:04:42.644Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.040 00:06:24.040 18:04:42 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.040 18:04:42 -- dd/posix.sh@73 -- # (( atime_if < 1731953082 )) 00:06:24.040 00:06:24.040 real 0m2.008s 00:06:24.040 user 0m0.553s 00:06:24.040 sys 0m0.218s 00:06:24.040 18:04:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.040 18:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 ************************************ 00:06:24.040 END TEST dd_flag_noatime_forced_aio 00:06:24.040 ************************************ 00:06:24.040 18:04:42 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:24.040 18:04:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.040 18:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.040 18:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 ************************************ 00:06:24.040 START TEST dd_flags_misc_forced_aio 00:06:24.040 ************************************ 00:06:24.040 18:04:42 -- common/autotest_common.sh@1114 -- # io 00:06:24.040 18:04:42 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:24.040 18:04:42 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:24.040 18:04:42 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:24.040 18:04:42 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:24.040 18:04:42 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:24.040 18:04:42 -- dd/common.sh@98 -- # xtrace_disable 00:06:24.040 18:04:42 -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 18:04:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.040 18:04:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.040 [2024-11-18 18:04:42.602477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
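For reference, the dd_flag_noatime_forced_aio run above boils down to: record the source file's access time with stat --printf=%X, copy it through spdk_dd with --iflag=noatime and confirm the atime is unchanged, then copy again without the flag and expect the atime to move forward. The sketch below is a minimal reconstruction under assumptions, not the suite's dd/posix.sh: the dump paths are placeholders and head -c stands in for the harness's gen_bytes helper.

  #!/usr/bin/env bash
  # Hedged sketch of the noatime check; paths and payload generation are illustrative.
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as seen in the log
  src=/tmp/dd.dump0 dst=/tmp/dd.dump1                      # placeholder dump files
  head -c 512 /dev/urandom > "$src"                        # stand-in for gen_bytes 512
  : > "$dst"

  atime_before=$(stat --printf=%X "$src")                  # atime in seconds since epoch
  sleep 1                                                  # so a later atime update is visible
  "$SPDK_DD" --aio --if="$src" --iflag=noatime --of="$dst" # noatime read must not touch atime
  atime_after=$(stat --printf=%X "$src")
  (( atime_before == atime_after )) || { echo "noatime was not honored"; exit 1; }

  "$SPDK_DD" --aio --if="$src" --of="$dst"                 # plain read may bump atime
  if (( $(stat --printf=%X "$src") > atime_before )); then
      echo "atime advanced after the un-flagged copy (filesystem permitting)"
  fi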
00:06:24.040 [2024-11-18 18:04:42.602593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58513 ] 00:06:24.300 [2024-11-18 18:04:42.738885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.300 [2024-11-18 18:04:42.796746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.300  [2024-11-18T18:04:43.163Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.559 00:06:24.559 18:04:43 -- dd/posix.sh@93 -- # [[ j4g0r3348r8vl7fc9ebi5mhk2f8hxmah3ydoixlh3v3zrz3euk2yqkfjjwuiheweuiosrxjzyhg021o9mro7jj7wkn79v9pfga48zustbs1rh25b7w96vrh11c13h49uyytackovbv7rql5fnvsyx1q9fnyz1am1oa8mzbqu82rpragsbzfmrscf2xvg1zdjh5wxtlpaj0ot9spbeivnhfontwrufga0jdbnq0zyj0k3unuj6atunos4ak0qqaiuiyrbt1tkvux72hy4d2ebx28wc22o9ziypiv650iz94jri977mtzo3odx2jbqea823l615nzvhosygk0ljw032i2iesp0yojb98xc6qlupgytw1qcgwm0mpyqgtpvg0a2wgiyc45nuch3wp7ysn5lwjuc46mz43pm8hsh3q0e52zlwuyf9o638aqezkqqud3tndfs2l6se1dyeoqqwmtiq43kcrtilw053xepyxagmk22mplpt0qvfj611c3gxib9 == \j\4\g\0\r\3\3\4\8\r\8\v\l\7\f\c\9\e\b\i\5\m\h\k\2\f\8\h\x\m\a\h\3\y\d\o\i\x\l\h\3\v\3\z\r\z\3\e\u\k\2\y\q\k\f\j\j\w\u\i\h\e\w\e\u\i\o\s\r\x\j\z\y\h\g\0\2\1\o\9\m\r\o\7\j\j\7\w\k\n\7\9\v\9\p\f\g\a\4\8\z\u\s\t\b\s\1\r\h\2\5\b\7\w\9\6\v\r\h\1\1\c\1\3\h\4\9\u\y\y\t\a\c\k\o\v\b\v\7\r\q\l\5\f\n\v\s\y\x\1\q\9\f\n\y\z\1\a\m\1\o\a\8\m\z\b\q\u\8\2\r\p\r\a\g\s\b\z\f\m\r\s\c\f\2\x\v\g\1\z\d\j\h\5\w\x\t\l\p\a\j\0\o\t\9\s\p\b\e\i\v\n\h\f\o\n\t\w\r\u\f\g\a\0\j\d\b\n\q\0\z\y\j\0\k\3\u\n\u\j\6\a\t\u\n\o\s\4\a\k\0\q\q\a\i\u\i\y\r\b\t\1\t\k\v\u\x\7\2\h\y\4\d\2\e\b\x\2\8\w\c\2\2\o\9\z\i\y\p\i\v\6\5\0\i\z\9\4\j\r\i\9\7\7\m\t\z\o\3\o\d\x\2\j\b\q\e\a\8\2\3\l\6\1\5\n\z\v\h\o\s\y\g\k\0\l\j\w\0\3\2\i\2\i\e\s\p\0\y\o\j\b\9\8\x\c\6\q\l\u\p\g\y\t\w\1\q\c\g\w\m\0\m\p\y\q\g\t\p\v\g\0\a\2\w\g\i\y\c\4\5\n\u\c\h\3\w\p\7\y\s\n\5\l\w\j\u\c\4\6\m\z\4\3\p\m\8\h\s\h\3\q\0\e\5\2\z\l\w\u\y\f\9\o\6\3\8\a\q\e\z\k\q\q\u\d\3\t\n\d\f\s\2\l\6\s\e\1\d\y\e\o\q\q\w\m\t\i\q\4\3\k\c\r\t\i\l\w\0\5\3\x\e\p\y\x\a\g\m\k\2\2\m\p\l\p\t\0\q\v\f\j\6\1\1\c\3\g\x\i\b\9 ]] 00:06:24.559 18:04:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.559 18:04:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:24.559 [2024-11-18 18:04:43.082248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
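The dd_flags_misc_forced_aio pass that just completed (direct in, direct out) is the first of eight: the read flags (direct, nonblock) are crossed with the write flags (direct, nonblock, sync, dsync), and each combination copies 512 fresh random bytes and compares source and destination, which is what the long [[ ... == ... ]] lines record in base64 form. A hedged sketch of that loop follows; paths are illustrative, head -c stands in for gen_bytes, and cmp replaces the suite's base64 pattern match.

  #!/usr/bin/env bash
  # Sketch of the read-flag x write-flag matrix; not the actual dd/posix.sh loop.
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  src=/tmp/dd.dump0 dst=/tmp/dd.dump1
  flags_ro=(direct nonblock)                 # read-side flags, as in the log
  flags_rw=("${flags_ro[@]}" sync dsync)     # write side adds sync and dsync

  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          head -c 512 /dev/urandom > "$src"  # fresh payload per combination
          "$SPDK_DD" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
          cmp -s "$src" "$dst" || { echo "mismatch for $flag_ro/$flag_rw"; exit 1; }
      done
  done
  echo "all $(( ${#flags_ro[@]} * ${#flags_rw[@]} )) combinations copied intact"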
00:06:24.559 [2024-11-18 18:04:43.082339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58515 ] 00:06:24.819 [2024-11-18 18:04:43.218393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.819 [2024-11-18 18:04:43.271022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.819  [2024-11-18T18:04:43.683Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.079 00:06:25.080 18:04:43 -- dd/posix.sh@93 -- # [[ j4g0r3348r8vl7fc9ebi5mhk2f8hxmah3ydoixlh3v3zrz3euk2yqkfjjwuiheweuiosrxjzyhg021o9mro7jj7wkn79v9pfga48zustbs1rh25b7w96vrh11c13h49uyytackovbv7rql5fnvsyx1q9fnyz1am1oa8mzbqu82rpragsbzfmrscf2xvg1zdjh5wxtlpaj0ot9spbeivnhfontwrufga0jdbnq0zyj0k3unuj6atunos4ak0qqaiuiyrbt1tkvux72hy4d2ebx28wc22o9ziypiv650iz94jri977mtzo3odx2jbqea823l615nzvhosygk0ljw032i2iesp0yojb98xc6qlupgytw1qcgwm0mpyqgtpvg0a2wgiyc45nuch3wp7ysn5lwjuc46mz43pm8hsh3q0e52zlwuyf9o638aqezkqqud3tndfs2l6se1dyeoqqwmtiq43kcrtilw053xepyxagmk22mplpt0qvfj611c3gxib9 == \j\4\g\0\r\3\3\4\8\r\8\v\l\7\f\c\9\e\b\i\5\m\h\k\2\f\8\h\x\m\a\h\3\y\d\o\i\x\l\h\3\v\3\z\r\z\3\e\u\k\2\y\q\k\f\j\j\w\u\i\h\e\w\e\u\i\o\s\r\x\j\z\y\h\g\0\2\1\o\9\m\r\o\7\j\j\7\w\k\n\7\9\v\9\p\f\g\a\4\8\z\u\s\t\b\s\1\r\h\2\5\b\7\w\9\6\v\r\h\1\1\c\1\3\h\4\9\u\y\y\t\a\c\k\o\v\b\v\7\r\q\l\5\f\n\v\s\y\x\1\q\9\f\n\y\z\1\a\m\1\o\a\8\m\z\b\q\u\8\2\r\p\r\a\g\s\b\z\f\m\r\s\c\f\2\x\v\g\1\z\d\j\h\5\w\x\t\l\p\a\j\0\o\t\9\s\p\b\e\i\v\n\h\f\o\n\t\w\r\u\f\g\a\0\j\d\b\n\q\0\z\y\j\0\k\3\u\n\u\j\6\a\t\u\n\o\s\4\a\k\0\q\q\a\i\u\i\y\r\b\t\1\t\k\v\u\x\7\2\h\y\4\d\2\e\b\x\2\8\w\c\2\2\o\9\z\i\y\p\i\v\6\5\0\i\z\9\4\j\r\i\9\7\7\m\t\z\o\3\o\d\x\2\j\b\q\e\a\8\2\3\l\6\1\5\n\z\v\h\o\s\y\g\k\0\l\j\w\0\3\2\i\2\i\e\s\p\0\y\o\j\b\9\8\x\c\6\q\l\u\p\g\y\t\w\1\q\c\g\w\m\0\m\p\y\q\g\t\p\v\g\0\a\2\w\g\i\y\c\4\5\n\u\c\h\3\w\p\7\y\s\n\5\l\w\j\u\c\4\6\m\z\4\3\p\m\8\h\s\h\3\q\0\e\5\2\z\l\w\u\y\f\9\o\6\3\8\a\q\e\z\k\q\q\u\d\3\t\n\d\f\s\2\l\6\s\e\1\d\y\e\o\q\q\w\m\t\i\q\4\3\k\c\r\t\i\l\w\0\5\3\x\e\p\y\x\a\g\m\k\2\2\m\p\l\p\t\0\q\v\f\j\6\1\1\c\3\g\x\i\b\9 ]] 00:06:25.080 18:04:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.080 18:04:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:25.080 [2024-11-18 18:04:43.540676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.080 [2024-11-18 18:04:43.540776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58528 ] 00:06:25.080 [2024-11-18 18:04:43.675800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.339 [2024-11-18 18:04:43.724862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.339  [2024-11-18T18:04:43.943Z] Copying: 512/512 [B] (average 166 kBps) 00:06:25.339 00:06:25.339 18:04:43 -- dd/posix.sh@93 -- # [[ j4g0r3348r8vl7fc9ebi5mhk2f8hxmah3ydoixlh3v3zrz3euk2yqkfjjwuiheweuiosrxjzyhg021o9mro7jj7wkn79v9pfga48zustbs1rh25b7w96vrh11c13h49uyytackovbv7rql5fnvsyx1q9fnyz1am1oa8mzbqu82rpragsbzfmrscf2xvg1zdjh5wxtlpaj0ot9spbeivnhfontwrufga0jdbnq0zyj0k3unuj6atunos4ak0qqaiuiyrbt1tkvux72hy4d2ebx28wc22o9ziypiv650iz94jri977mtzo3odx2jbqea823l615nzvhosygk0ljw032i2iesp0yojb98xc6qlupgytw1qcgwm0mpyqgtpvg0a2wgiyc45nuch3wp7ysn5lwjuc46mz43pm8hsh3q0e52zlwuyf9o638aqezkqqud3tndfs2l6se1dyeoqqwmtiq43kcrtilw053xepyxagmk22mplpt0qvfj611c3gxib9 == \j\4\g\0\r\3\3\4\8\r\8\v\l\7\f\c\9\e\b\i\5\m\h\k\2\f\8\h\x\m\a\h\3\y\d\o\i\x\l\h\3\v\3\z\r\z\3\e\u\k\2\y\q\k\f\j\j\w\u\i\h\e\w\e\u\i\o\s\r\x\j\z\y\h\g\0\2\1\o\9\m\r\o\7\j\j\7\w\k\n\7\9\v\9\p\f\g\a\4\8\z\u\s\t\b\s\1\r\h\2\5\b\7\w\9\6\v\r\h\1\1\c\1\3\h\4\9\u\y\y\t\a\c\k\o\v\b\v\7\r\q\l\5\f\n\v\s\y\x\1\q\9\f\n\y\z\1\a\m\1\o\a\8\m\z\b\q\u\8\2\r\p\r\a\g\s\b\z\f\m\r\s\c\f\2\x\v\g\1\z\d\j\h\5\w\x\t\l\p\a\j\0\o\t\9\s\p\b\e\i\v\n\h\f\o\n\t\w\r\u\f\g\a\0\j\d\b\n\q\0\z\y\j\0\k\3\u\n\u\j\6\a\t\u\n\o\s\4\a\k\0\q\q\a\i\u\i\y\r\b\t\1\t\k\v\u\x\7\2\h\y\4\d\2\e\b\x\2\8\w\c\2\2\o\9\z\i\y\p\i\v\6\5\0\i\z\9\4\j\r\i\9\7\7\m\t\z\o\3\o\d\x\2\j\b\q\e\a\8\2\3\l\6\1\5\n\z\v\h\o\s\y\g\k\0\l\j\w\0\3\2\i\2\i\e\s\p\0\y\o\j\b\9\8\x\c\6\q\l\u\p\g\y\t\w\1\q\c\g\w\m\0\m\p\y\q\g\t\p\v\g\0\a\2\w\g\i\y\c\4\5\n\u\c\h\3\w\p\7\y\s\n\5\l\w\j\u\c\4\6\m\z\4\3\p\m\8\h\s\h\3\q\0\e\5\2\z\l\w\u\y\f\9\o\6\3\8\a\q\e\z\k\q\q\u\d\3\t\n\d\f\s\2\l\6\s\e\1\d\y\e\o\q\q\w\m\t\i\q\4\3\k\c\r\t\i\l\w\0\5\3\x\e\p\y\x\a\g\m\k\2\2\m\p\l\p\t\0\q\v\f\j\6\1\1\c\3\g\x\i\b\9 ]] 00:06:25.339 18:04:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.339 18:04:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:25.598 [2024-11-18 18:04:43.985209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:25.598 [2024-11-18 18:04:43.985304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58530 ] 00:06:25.598 [2024-11-18 18:04:44.122960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.598 [2024-11-18 18:04:44.192700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.857  [2024-11-18T18:04:44.461Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.857 00:06:25.857 18:04:44 -- dd/posix.sh@93 -- # [[ j4g0r3348r8vl7fc9ebi5mhk2f8hxmah3ydoixlh3v3zrz3euk2yqkfjjwuiheweuiosrxjzyhg021o9mro7jj7wkn79v9pfga48zustbs1rh25b7w96vrh11c13h49uyytackovbv7rql5fnvsyx1q9fnyz1am1oa8mzbqu82rpragsbzfmrscf2xvg1zdjh5wxtlpaj0ot9spbeivnhfontwrufga0jdbnq0zyj0k3unuj6atunos4ak0qqaiuiyrbt1tkvux72hy4d2ebx28wc22o9ziypiv650iz94jri977mtzo3odx2jbqea823l615nzvhosygk0ljw032i2iesp0yojb98xc6qlupgytw1qcgwm0mpyqgtpvg0a2wgiyc45nuch3wp7ysn5lwjuc46mz43pm8hsh3q0e52zlwuyf9o638aqezkqqud3tndfs2l6se1dyeoqqwmtiq43kcrtilw053xepyxagmk22mplpt0qvfj611c3gxib9 == \j\4\g\0\r\3\3\4\8\r\8\v\l\7\f\c\9\e\b\i\5\m\h\k\2\f\8\h\x\m\a\h\3\y\d\o\i\x\l\h\3\v\3\z\r\z\3\e\u\k\2\y\q\k\f\j\j\w\u\i\h\e\w\e\u\i\o\s\r\x\j\z\y\h\g\0\2\1\o\9\m\r\o\7\j\j\7\w\k\n\7\9\v\9\p\f\g\a\4\8\z\u\s\t\b\s\1\r\h\2\5\b\7\w\9\6\v\r\h\1\1\c\1\3\h\4\9\u\y\y\t\a\c\k\o\v\b\v\7\r\q\l\5\f\n\v\s\y\x\1\q\9\f\n\y\z\1\a\m\1\o\a\8\m\z\b\q\u\8\2\r\p\r\a\g\s\b\z\f\m\r\s\c\f\2\x\v\g\1\z\d\j\h\5\w\x\t\l\p\a\j\0\o\t\9\s\p\b\e\i\v\n\h\f\o\n\t\w\r\u\f\g\a\0\j\d\b\n\q\0\z\y\j\0\k\3\u\n\u\j\6\a\t\u\n\o\s\4\a\k\0\q\q\a\i\u\i\y\r\b\t\1\t\k\v\u\x\7\2\h\y\4\d\2\e\b\x\2\8\w\c\2\2\o\9\z\i\y\p\i\v\6\5\0\i\z\9\4\j\r\i\9\7\7\m\t\z\o\3\o\d\x\2\j\b\q\e\a\8\2\3\l\6\1\5\n\z\v\h\o\s\y\g\k\0\l\j\w\0\3\2\i\2\i\e\s\p\0\y\o\j\b\9\8\x\c\6\q\l\u\p\g\y\t\w\1\q\c\g\w\m\0\m\p\y\q\g\t\p\v\g\0\a\2\w\g\i\y\c\4\5\n\u\c\h\3\w\p\7\y\s\n\5\l\w\j\u\c\4\6\m\z\4\3\p\m\8\h\s\h\3\q\0\e\5\2\z\l\w\u\y\f\9\o\6\3\8\a\q\e\z\k\q\q\u\d\3\t\n\d\f\s\2\l\6\s\e\1\d\y\e\o\q\q\w\m\t\i\q\4\3\k\c\r\t\i\l\w\0\5\3\x\e\p\y\x\a\g\m\k\2\2\m\p\l\p\t\0\q\v\f\j\6\1\1\c\3\g\x\i\b\9 ]] 00:06:25.857 18:04:44 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:25.857 18:04:44 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:25.857 18:04:44 -- dd/common.sh@98 -- # xtrace_disable 00:06:25.857 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.857 18:04:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.857 18:04:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:26.117 [2024-11-18 18:04:44.491871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.117 [2024-11-18 18:04:44.491965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:06:26.117 [2024-11-18 18:04:44.622700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.117 [2024-11-18 18:04:44.676679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.376  [2024-11-18T18:04:44.980Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.376 00:06:26.376 18:04:44 -- dd/posix.sh@93 -- # [[ qrioh3swnk18e8c35iec97bcod31ui6pdgpeecqgompayid85qgoimtu8shcg9rw70zbeseil7cm5ff8ma7o6oqm9jbkl8pcy2fh36sp5icogh149q6wp7g5ozilflzileuxj6mcnbjob4alct8mxr7aviapestnpczaqm3hpzohi95kg6o32a4dhwzohkngw1j6cwgcazijkfm5hfdgkejycu2qtv5x503fusoydnz2iypextdyutwmdy0ogtsec7n2u65uic7rznr848t9wq0n4sklx8v1cs49m016im2dj8chozmdt6eswxn7l1ozlkm668uaa6usul75xl6rlmfcdtfrlx8saud4fgfhoa93ihmlvvdnks7ykyg0jeihzts0kvs38w1u1udzk3us4jzbnvv8jvpshbt8ms1wemxtjn2ckir1xr1vnsjoclwy6sh70svw8c988tkljvhtr1tknr39z13ydvfmcq1jz82os98atkii8soiv9c5f8iz == \q\r\i\o\h\3\s\w\n\k\1\8\e\8\c\3\5\i\e\c\9\7\b\c\o\d\3\1\u\i\6\p\d\g\p\e\e\c\q\g\o\m\p\a\y\i\d\8\5\q\g\o\i\m\t\u\8\s\h\c\g\9\r\w\7\0\z\b\e\s\e\i\l\7\c\m\5\f\f\8\m\a\7\o\6\o\q\m\9\j\b\k\l\8\p\c\y\2\f\h\3\6\s\p\5\i\c\o\g\h\1\4\9\q\6\w\p\7\g\5\o\z\i\l\f\l\z\i\l\e\u\x\j\6\m\c\n\b\j\o\b\4\a\l\c\t\8\m\x\r\7\a\v\i\a\p\e\s\t\n\p\c\z\a\q\m\3\h\p\z\o\h\i\9\5\k\g\6\o\3\2\a\4\d\h\w\z\o\h\k\n\g\w\1\j\6\c\w\g\c\a\z\i\j\k\f\m\5\h\f\d\g\k\e\j\y\c\u\2\q\t\v\5\x\5\0\3\f\u\s\o\y\d\n\z\2\i\y\p\e\x\t\d\y\u\t\w\m\d\y\0\o\g\t\s\e\c\7\n\2\u\6\5\u\i\c\7\r\z\n\r\8\4\8\t\9\w\q\0\n\4\s\k\l\x\8\v\1\c\s\4\9\m\0\1\6\i\m\2\d\j\8\c\h\o\z\m\d\t\6\e\s\w\x\n\7\l\1\o\z\l\k\m\6\6\8\u\a\a\6\u\s\u\l\7\5\x\l\6\r\l\m\f\c\d\t\f\r\l\x\8\s\a\u\d\4\f\g\f\h\o\a\9\3\i\h\m\l\v\v\d\n\k\s\7\y\k\y\g\0\j\e\i\h\z\t\s\0\k\v\s\3\8\w\1\u\1\u\d\z\k\3\u\s\4\j\z\b\n\v\v\8\j\v\p\s\h\b\t\8\m\s\1\w\e\m\x\t\j\n\2\c\k\i\r\1\x\r\1\v\n\s\j\o\c\l\w\y\6\s\h\7\0\s\v\w\8\c\9\8\8\t\k\l\j\v\h\t\r\1\t\k\n\r\3\9\z\1\3\y\d\v\f\m\c\q\1\j\z\8\2\o\s\9\8\a\t\k\i\i\8\s\o\i\v\9\c\5\f\8\i\z ]] 00:06:26.376 18:04:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.376 18:04:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:26.376 [2024-11-18 18:04:44.946766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.376 [2024-11-18 18:04:44.946883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58545 ] 00:06:26.635 [2024-11-18 18:04:45.086095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.635 [2024-11-18 18:04:45.146399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.635  [2024-11-18T18:04:45.497Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.893 00:06:26.893 18:04:45 -- dd/posix.sh@93 -- # [[ qrioh3swnk18e8c35iec97bcod31ui6pdgpeecqgompayid85qgoimtu8shcg9rw70zbeseil7cm5ff8ma7o6oqm9jbkl8pcy2fh36sp5icogh149q6wp7g5ozilflzileuxj6mcnbjob4alct8mxr7aviapestnpczaqm3hpzohi95kg6o32a4dhwzohkngw1j6cwgcazijkfm5hfdgkejycu2qtv5x503fusoydnz2iypextdyutwmdy0ogtsec7n2u65uic7rznr848t9wq0n4sklx8v1cs49m016im2dj8chozmdt6eswxn7l1ozlkm668uaa6usul75xl6rlmfcdtfrlx8saud4fgfhoa93ihmlvvdnks7ykyg0jeihzts0kvs38w1u1udzk3us4jzbnvv8jvpshbt8ms1wemxtjn2ckir1xr1vnsjoclwy6sh70svw8c988tkljvhtr1tknr39z13ydvfmcq1jz82os98atkii8soiv9c5f8iz == \q\r\i\o\h\3\s\w\n\k\1\8\e\8\c\3\5\i\e\c\9\7\b\c\o\d\3\1\u\i\6\p\d\g\p\e\e\c\q\g\o\m\p\a\y\i\d\8\5\q\g\o\i\m\t\u\8\s\h\c\g\9\r\w\7\0\z\b\e\s\e\i\l\7\c\m\5\f\f\8\m\a\7\o\6\o\q\m\9\j\b\k\l\8\p\c\y\2\f\h\3\6\s\p\5\i\c\o\g\h\1\4\9\q\6\w\p\7\g\5\o\z\i\l\f\l\z\i\l\e\u\x\j\6\m\c\n\b\j\o\b\4\a\l\c\t\8\m\x\r\7\a\v\i\a\p\e\s\t\n\p\c\z\a\q\m\3\h\p\z\o\h\i\9\5\k\g\6\o\3\2\a\4\d\h\w\z\o\h\k\n\g\w\1\j\6\c\w\g\c\a\z\i\j\k\f\m\5\h\f\d\g\k\e\j\y\c\u\2\q\t\v\5\x\5\0\3\f\u\s\o\y\d\n\z\2\i\y\p\e\x\t\d\y\u\t\w\m\d\y\0\o\g\t\s\e\c\7\n\2\u\6\5\u\i\c\7\r\z\n\r\8\4\8\t\9\w\q\0\n\4\s\k\l\x\8\v\1\c\s\4\9\m\0\1\6\i\m\2\d\j\8\c\h\o\z\m\d\t\6\e\s\w\x\n\7\l\1\o\z\l\k\m\6\6\8\u\a\a\6\u\s\u\l\7\5\x\l\6\r\l\m\f\c\d\t\f\r\l\x\8\s\a\u\d\4\f\g\f\h\o\a\9\3\i\h\m\l\v\v\d\n\k\s\7\y\k\y\g\0\j\e\i\h\z\t\s\0\k\v\s\3\8\w\1\u\1\u\d\z\k\3\u\s\4\j\z\b\n\v\v\8\j\v\p\s\h\b\t\8\m\s\1\w\e\m\x\t\j\n\2\c\k\i\r\1\x\r\1\v\n\s\j\o\c\l\w\y\6\s\h\7\0\s\v\w\8\c\9\8\8\t\k\l\j\v\h\t\r\1\t\k\n\r\3\9\z\1\3\y\d\v\f\m\c\q\1\j\z\8\2\o\s\9\8\a\t\k\i\i\8\s\o\i\v\9\c\5\f\8\i\z ]] 00:06:26.893 18:04:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.893 18:04:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:26.893 [2024-11-18 18:04:45.420099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:26.893 [2024-11-18 18:04:45.420192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58552 ] 00:06:27.152 [2024-11-18 18:04:45.551682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.152 [2024-11-18 18:04:45.612513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.152  [2024-11-18T18:04:46.016Z] Copying: 512/512 [B] (average 500 kBps) 00:06:27.412 00:06:27.412 18:04:45 -- dd/posix.sh@93 -- # [[ qrioh3swnk18e8c35iec97bcod31ui6pdgpeecqgompayid85qgoimtu8shcg9rw70zbeseil7cm5ff8ma7o6oqm9jbkl8pcy2fh36sp5icogh149q6wp7g5ozilflzileuxj6mcnbjob4alct8mxr7aviapestnpczaqm3hpzohi95kg6o32a4dhwzohkngw1j6cwgcazijkfm5hfdgkejycu2qtv5x503fusoydnz2iypextdyutwmdy0ogtsec7n2u65uic7rznr848t9wq0n4sklx8v1cs49m016im2dj8chozmdt6eswxn7l1ozlkm668uaa6usul75xl6rlmfcdtfrlx8saud4fgfhoa93ihmlvvdnks7ykyg0jeihzts0kvs38w1u1udzk3us4jzbnvv8jvpshbt8ms1wemxtjn2ckir1xr1vnsjoclwy6sh70svw8c988tkljvhtr1tknr39z13ydvfmcq1jz82os98atkii8soiv9c5f8iz == \q\r\i\o\h\3\s\w\n\k\1\8\e\8\c\3\5\i\e\c\9\7\b\c\o\d\3\1\u\i\6\p\d\g\p\e\e\c\q\g\o\m\p\a\y\i\d\8\5\q\g\o\i\m\t\u\8\s\h\c\g\9\r\w\7\0\z\b\e\s\e\i\l\7\c\m\5\f\f\8\m\a\7\o\6\o\q\m\9\j\b\k\l\8\p\c\y\2\f\h\3\6\s\p\5\i\c\o\g\h\1\4\9\q\6\w\p\7\g\5\o\z\i\l\f\l\z\i\l\e\u\x\j\6\m\c\n\b\j\o\b\4\a\l\c\t\8\m\x\r\7\a\v\i\a\p\e\s\t\n\p\c\z\a\q\m\3\h\p\z\o\h\i\9\5\k\g\6\o\3\2\a\4\d\h\w\z\o\h\k\n\g\w\1\j\6\c\w\g\c\a\z\i\j\k\f\m\5\h\f\d\g\k\e\j\y\c\u\2\q\t\v\5\x\5\0\3\f\u\s\o\y\d\n\z\2\i\y\p\e\x\t\d\y\u\t\w\m\d\y\0\o\g\t\s\e\c\7\n\2\u\6\5\u\i\c\7\r\z\n\r\8\4\8\t\9\w\q\0\n\4\s\k\l\x\8\v\1\c\s\4\9\m\0\1\6\i\m\2\d\j\8\c\h\o\z\m\d\t\6\e\s\w\x\n\7\l\1\o\z\l\k\m\6\6\8\u\a\a\6\u\s\u\l\7\5\x\l\6\r\l\m\f\c\d\t\f\r\l\x\8\s\a\u\d\4\f\g\f\h\o\a\9\3\i\h\m\l\v\v\d\n\k\s\7\y\k\y\g\0\j\e\i\h\z\t\s\0\k\v\s\3\8\w\1\u\1\u\d\z\k\3\u\s\4\j\z\b\n\v\v\8\j\v\p\s\h\b\t\8\m\s\1\w\e\m\x\t\j\n\2\c\k\i\r\1\x\r\1\v\n\s\j\o\c\l\w\y\6\s\h\7\0\s\v\w\8\c\9\8\8\t\k\l\j\v\h\t\r\1\t\k\n\r\3\9\z\1\3\y\d\v\f\m\c\q\1\j\z\8\2\o\s\9\8\a\t\k\i\i\8\s\o\i\v\9\c\5\f\8\i\z ]] 00:06:27.412 18:04:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.412 18:04:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:27.412 [2024-11-18 18:04:45.899140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:27.412 [2024-11-18 18:04:45.899237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58560 ] 00:06:27.674 [2024-11-18 18:04:46.037439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.674 [2024-11-18 18:04:46.093493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.674  [2024-11-18T18:04:46.541Z] Copying: 512/512 [B] (average 250 kBps) 00:06:27.937 00:06:27.937 18:04:46 -- dd/posix.sh@93 -- # [[ qrioh3swnk18e8c35iec97bcod31ui6pdgpeecqgompayid85qgoimtu8shcg9rw70zbeseil7cm5ff8ma7o6oqm9jbkl8pcy2fh36sp5icogh149q6wp7g5ozilflzileuxj6mcnbjob4alct8mxr7aviapestnpczaqm3hpzohi95kg6o32a4dhwzohkngw1j6cwgcazijkfm5hfdgkejycu2qtv5x503fusoydnz2iypextdyutwmdy0ogtsec7n2u65uic7rznr848t9wq0n4sklx8v1cs49m016im2dj8chozmdt6eswxn7l1ozlkm668uaa6usul75xl6rlmfcdtfrlx8saud4fgfhoa93ihmlvvdnks7ykyg0jeihzts0kvs38w1u1udzk3us4jzbnvv8jvpshbt8ms1wemxtjn2ckir1xr1vnsjoclwy6sh70svw8c988tkljvhtr1tknr39z13ydvfmcq1jz82os98atkii8soiv9c5f8iz == \q\r\i\o\h\3\s\w\n\k\1\8\e\8\c\3\5\i\e\c\9\7\b\c\o\d\3\1\u\i\6\p\d\g\p\e\e\c\q\g\o\m\p\a\y\i\d\8\5\q\g\o\i\m\t\u\8\s\h\c\g\9\r\w\7\0\z\b\e\s\e\i\l\7\c\m\5\f\f\8\m\a\7\o\6\o\q\m\9\j\b\k\l\8\p\c\y\2\f\h\3\6\s\p\5\i\c\o\g\h\1\4\9\q\6\w\p\7\g\5\o\z\i\l\f\l\z\i\l\e\u\x\j\6\m\c\n\b\j\o\b\4\a\l\c\t\8\m\x\r\7\a\v\i\a\p\e\s\t\n\p\c\z\a\q\m\3\h\p\z\o\h\i\9\5\k\g\6\o\3\2\a\4\d\h\w\z\o\h\k\n\g\w\1\j\6\c\w\g\c\a\z\i\j\k\f\m\5\h\f\d\g\k\e\j\y\c\u\2\q\t\v\5\x\5\0\3\f\u\s\o\y\d\n\z\2\i\y\p\e\x\t\d\y\u\t\w\m\d\y\0\o\g\t\s\e\c\7\n\2\u\6\5\u\i\c\7\r\z\n\r\8\4\8\t\9\w\q\0\n\4\s\k\l\x\8\v\1\c\s\4\9\m\0\1\6\i\m\2\d\j\8\c\h\o\z\m\d\t\6\e\s\w\x\n\7\l\1\o\z\l\k\m\6\6\8\u\a\a\6\u\s\u\l\7\5\x\l\6\r\l\m\f\c\d\t\f\r\l\x\8\s\a\u\d\4\f\g\f\h\o\a\9\3\i\h\m\l\v\v\d\n\k\s\7\y\k\y\g\0\j\e\i\h\z\t\s\0\k\v\s\3\8\w\1\u\1\u\d\z\k\3\u\s\4\j\z\b\n\v\v\8\j\v\p\s\h\b\t\8\m\s\1\w\e\m\x\t\j\n\2\c\k\i\r\1\x\r\1\v\n\s\j\o\c\l\w\y\6\s\h\7\0\s\v\w\8\c\9\8\8\t\k\l\j\v\h\t\r\1\t\k\n\r\3\9\z\1\3\y\d\v\f\m\c\q\1\j\z\8\2\o\s\9\8\a\t\k\i\i\8\s\o\i\v\9\c\5\f\8\i\z ]] 00:06:27.937 00:06:27.937 real 0m3.780s 00:06:27.937 user 0m2.055s 00:06:27.937 sys 0m0.743s 00:06:27.937 ************************************ 00:06:27.937 END TEST dd_flags_misc_forced_aio 00:06:27.937 ************************************ 00:06:27.937 18:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.937 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.937 18:04:46 -- dd/posix.sh@1 -- # cleanup 00:06:27.937 18:04:46 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:27.937 18:04:46 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:27.937 00:06:27.937 real 0m17.959s 00:06:27.937 user 0m8.622s 00:06:27.937 sys 0m3.497s 00:06:27.937 18:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.937 ************************************ 00:06:27.937 END TEST spdk_dd_posix 00:06:27.937 ************************************ 00:06:27.937 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.937 18:04:46 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.937 18:04:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.937 18:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:06:27.937 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.937 ************************************ 00:06:27.937 START TEST spdk_dd_malloc 00:06:27.937 ************************************ 00:06:27.937 18:04:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:27.937 * Looking for test storage... 00:06:27.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.937 18:04:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.937 18:04:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.937 18:04:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:28.197 18:04:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:28.197 18:04:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:28.197 18:04:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:28.197 18:04:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:28.197 18:04:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:28.197 18:04:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.197 18:04:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:28.197 18:04:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:28.197 18:04:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:28.197 18:04:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:28.197 18:04:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:28.197 18:04:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:28.197 18:04:46 -- scripts/common.sh@344 -- # : 1 00:06:28.197 18:04:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:28.197 18:04:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.197 18:04:46 -- scripts/common.sh@364 -- # decimal 1 00:06:28.197 18:04:46 -- scripts/common.sh@352 -- # local d=1 00:06:28.197 18:04:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.197 18:04:46 -- scripts/common.sh@354 -- # echo 1 00:06:28.197 18:04:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:28.197 18:04:46 -- scripts/common.sh@365 -- # decimal 2 00:06:28.197 18:04:46 -- scripts/common.sh@352 -- # local d=2 00:06:28.197 18:04:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.197 18:04:46 -- scripts/common.sh@354 -- # echo 2 00:06:28.197 18:04:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:28.197 18:04:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:28.197 18:04:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:28.197 18:04:46 -- scripts/common.sh@367 -- # return 0 00:06:28.197 18:04:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:28.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.197 --rc genhtml_branch_coverage=1 00:06:28.197 --rc genhtml_function_coverage=1 00:06:28.197 --rc genhtml_legend=1 00:06:28.197 --rc geninfo_all_blocks=1 00:06:28.197 --rc geninfo_unexecuted_blocks=1 00:06:28.197 00:06:28.197 ' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:28.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.197 --rc genhtml_branch_coverage=1 00:06:28.197 --rc genhtml_function_coverage=1 00:06:28.197 --rc genhtml_legend=1 00:06:28.197 --rc geninfo_all_blocks=1 00:06:28.197 --rc geninfo_unexecuted_blocks=1 00:06:28.197 00:06:28.197 ' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:06:28.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.197 --rc genhtml_branch_coverage=1 00:06:28.197 --rc genhtml_function_coverage=1 00:06:28.197 --rc genhtml_legend=1 00:06:28.197 --rc geninfo_all_blocks=1 00:06:28.197 --rc geninfo_unexecuted_blocks=1 00:06:28.197 00:06:28.197 ' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:28.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.197 --rc genhtml_branch_coverage=1 00:06:28.197 --rc genhtml_function_coverage=1 00:06:28.197 --rc genhtml_legend=1 00:06:28.197 --rc geninfo_all_blocks=1 00:06:28.197 --rc geninfo_unexecuted_blocks=1 00:06:28.197 00:06:28.197 ' 00:06:28.197 18:04:46 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.197 18:04:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.197 18:04:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.197 18:04:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.197 18:04:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.197 18:04:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.197 18:04:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.197 18:04:46 -- paths/export.sh@5 -- # export PATH 00:06:28.197 18:04:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.197 18:04:46 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:28.197 18:04:46 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.197 18:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.197 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:28.197 ************************************ 00:06:28.197 START TEST dd_malloc_copy 00:06:28.197 ************************************ 00:06:28.197 18:04:46 -- common/autotest_common.sh@1114 -- # malloc_copy 00:06:28.197 18:04:46 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:28.197 18:04:46 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:28.197 18:04:46 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:28.197 18:04:46 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:28.197 18:04:46 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:28.197 18:04:46 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:28.197 18:04:46 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:28.197 18:04:46 -- dd/malloc.sh@28 -- # gen_conf 00:06:28.197 18:04:46 -- dd/common.sh@31 -- # xtrace_disable 00:06:28.197 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:06:28.197 [2024-11-18 18:04:46.661770] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.197 [2024-11-18 18:04:46.662043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:06:28.197 { 00:06:28.197 "subsystems": [ 00:06:28.197 { 00:06:28.198 "subsystem": "bdev", 00:06:28.198 "config": [ 00:06:28.198 { 00:06:28.198 "params": { 00:06:28.198 "block_size": 512, 00:06:28.198 "num_blocks": 1048576, 00:06:28.198 "name": "malloc0" 00:06:28.198 }, 00:06:28.198 "method": "bdev_malloc_create" 00:06:28.198 }, 00:06:28.198 { 00:06:28.198 "params": { 00:06:28.198 "block_size": 512, 00:06:28.198 "num_blocks": 1048576, 00:06:28.198 "name": "malloc1" 00:06:28.198 }, 00:06:28.198 "method": "bdev_malloc_create" 00:06:28.198 }, 00:06:28.198 { 00:06:28.198 "method": "bdev_wait_for_examine" 00:06:28.198 } 00:06:28.198 ] 00:06:28.198 } 00:06:28.198 ] 00:06:28.198 } 00:06:28.198 [2024-11-18 18:04:46.797843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.457 [2024-11-18 18:04:46.859237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.835  [2024-11-18T18:04:49.379Z] Copying: 231/512 [MB] (231 MBps) [2024-11-18T18:04:49.379Z] Copying: 469/512 [MB] (237 MBps) [2024-11-18T18:04:49.640Z] Copying: 512/512 [MB] (average 234 MBps) 00:06:31.036 00:06:31.036 18:04:49 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:31.036 18:04:49 -- dd/malloc.sh@33 -- # gen_conf 00:06:31.036 18:04:49 -- dd/common.sh@31 -- # xtrace_disable 00:06:31.036 18:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.295 [2024-11-18 18:04:49.649105] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
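The dd_malloc_copy run above is pure bdev-to-bdev traffic: the JSON handed to spdk_dd on /dev/fd/62 creates two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each), and the data is copied malloc0 -> malloc1 and then back the other way. Below is a hedged sketch of one direction; it reuses the config shape printed in the log but feeds it from a temporary file instead of the harness's fd 62.

  #!/usr/bin/env bash
  # One direction of the malloc copy; the JSON mirrors the config shown above in the log.
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf=$(mktemp)
  printf '%s\n' '{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }' > "$conf"

  # --ib/--ob name bdevs defined by that config; the harness passes the same JSON via /dev/fd/62
  "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$conf"
  rm -f "$conf"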
00:06:31.295 [2024-11-18 18:04:49.649189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ] 00:06:31.295 { 00:06:31.295 "subsystems": [ 00:06:31.295 { 00:06:31.295 "subsystem": "bdev", 00:06:31.295 "config": [ 00:06:31.295 { 00:06:31.295 "params": { 00:06:31.295 "block_size": 512, 00:06:31.295 "num_blocks": 1048576, 00:06:31.295 "name": "malloc0" 00:06:31.295 }, 00:06:31.295 "method": "bdev_malloc_create" 00:06:31.295 }, 00:06:31.295 { 00:06:31.295 "params": { 00:06:31.295 "block_size": 512, 00:06:31.295 "num_blocks": 1048576, 00:06:31.295 "name": "malloc1" 00:06:31.295 }, 00:06:31.295 "method": "bdev_malloc_create" 00:06:31.295 }, 00:06:31.295 { 00:06:31.295 "method": "bdev_wait_for_examine" 00:06:31.295 } 00:06:31.295 ] 00:06:31.295 } 00:06:31.295 ] 00:06:31.295 } 00:06:31.295 [2024-11-18 18:04:49.786083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.295 [2024-11-18 18:04:49.836084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.675  [2024-11-18T18:04:52.261Z] Copying: 237/512 [MB] (237 MBps) [2024-11-18T18:04:52.261Z] Copying: 480/512 [MB] (243 MBps) [2024-11-18T18:04:52.525Z] Copying: 512/512 [MB] (average 241 MBps) 00:06:33.921 00:06:33.921 ************************************ 00:06:33.921 END TEST dd_malloc_copy 00:06:33.921 ************************************ 00:06:33.921 00:06:33.921 real 0m5.900s 00:06:33.921 user 0m5.273s 00:06:33.921 sys 0m0.481s 00:06:33.921 18:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.921 18:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.181 ************************************ 00:06:34.181 END TEST spdk_dd_malloc 00:06:34.181 ************************************ 00:06:34.181 00:06:34.181 real 0m6.136s 00:06:34.181 user 0m5.389s 00:06:34.181 sys 0m0.600s 00:06:34.181 18:04:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.181 18:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.181 18:04:52 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:34.181 18:04:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:34.181 18:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.181 18:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.181 ************************************ 00:06:34.181 START TEST spdk_dd_bdev_to_bdev 00:06:34.181 ************************************ 00:06:34.181 18:04:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:06:34.181 * Looking for test storage... 
00:06:34.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:34.181 18:04:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:34.181 18:04:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:34.181 18:04:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:34.181 18:04:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:34.181 18:04:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:34.181 18:04:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:34.181 18:04:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:34.181 18:04:52 -- scripts/common.sh@335 -- # IFS=.-: 00:06:34.181 18:04:52 -- scripts/common.sh@335 -- # read -ra ver1 00:06:34.181 18:04:52 -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.181 18:04:52 -- scripts/common.sh@336 -- # read -ra ver2 00:06:34.181 18:04:52 -- scripts/common.sh@337 -- # local 'op=<' 00:06:34.181 18:04:52 -- scripts/common.sh@339 -- # ver1_l=2 00:06:34.181 18:04:52 -- scripts/common.sh@340 -- # ver2_l=1 00:06:34.181 18:04:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:34.181 18:04:52 -- scripts/common.sh@343 -- # case "$op" in 00:06:34.182 18:04:52 -- scripts/common.sh@344 -- # : 1 00:06:34.182 18:04:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:34.182 18:04:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.182 18:04:52 -- scripts/common.sh@364 -- # decimal 1 00:06:34.182 18:04:52 -- scripts/common.sh@352 -- # local d=1 00:06:34.182 18:04:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.182 18:04:52 -- scripts/common.sh@354 -- # echo 1 00:06:34.182 18:04:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:34.182 18:04:52 -- scripts/common.sh@365 -- # decimal 2 00:06:34.182 18:04:52 -- scripts/common.sh@352 -- # local d=2 00:06:34.182 18:04:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.182 18:04:52 -- scripts/common.sh@354 -- # echo 2 00:06:34.182 18:04:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:34.182 18:04:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:34.182 18:04:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:34.182 18:04:52 -- scripts/common.sh@367 -- # return 0 00:06:34.182 18:04:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.182 18:04:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.182 --rc genhtml_branch_coverage=1 00:06:34.182 --rc genhtml_function_coverage=1 00:06:34.182 --rc genhtml_legend=1 00:06:34.182 --rc geninfo_all_blocks=1 00:06:34.182 --rc geninfo_unexecuted_blocks=1 00:06:34.182 00:06:34.182 ' 00:06:34.182 18:04:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.182 --rc genhtml_branch_coverage=1 00:06:34.182 --rc genhtml_function_coverage=1 00:06:34.182 --rc genhtml_legend=1 00:06:34.182 --rc geninfo_all_blocks=1 00:06:34.182 --rc geninfo_unexecuted_blocks=1 00:06:34.182 00:06:34.182 ' 00:06:34.182 18:04:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.182 --rc genhtml_branch_coverage=1 00:06:34.182 --rc genhtml_function_coverage=1 00:06:34.182 --rc genhtml_legend=1 00:06:34.182 --rc geninfo_all_blocks=1 00:06:34.182 --rc geninfo_unexecuted_blocks=1 00:06:34.182 00:06:34.182 ' 00:06:34.182 18:04:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.182 --rc genhtml_branch_coverage=1 00:06:34.182 --rc genhtml_function_coverage=1 00:06:34.182 --rc genhtml_legend=1 00:06:34.182 --rc geninfo_all_blocks=1 00:06:34.182 --rc geninfo_unexecuted_blocks=1 00:06:34.182 00:06:34.182 ' 00:06:34.182 18:04:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:34.182 18:04:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.182 18:04:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.182 18:04:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.182 18:04:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.182 18:04:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.182 18:04:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.182 18:04:52 -- paths/export.sh@5 -- # export PATH 00:06:34.182 18:04:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:34.182 18:04:52 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:34.182 18:04:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:34.182 18:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.182 18:04:52 -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 ************************************ 00:06:34.442 START TEST dd_inflate_file 00:06:34.442 ************************************ 00:06:34.442 18:04:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:34.442 [2024-11-18 18:04:52.837323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.442 [2024-11-18 18:04:52.837579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58789 ] 00:06:34.442 [2024-11-18 18:04:52.973457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.442 [2024-11-18 18:04:53.020482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.702  [2024-11-18T18:04:53.306Z] Copying: 64/64 [MB] (average 2461 MBps) 00:06:34.702 00:06:34.702 00:06:34.702 real 0m0.470s 00:06:34.702 user 0m0.246s 00:06:34.702 sys 0m0.104s 00:06:34.702 ************************************ 00:06:34.702 END TEST dd_inflate_file 00:06:34.702 ************************************ 00:06:34.702 18:04:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.702 18:04:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.961 18:04:53 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:34.961 18:04:53 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:34.961 18:04:53 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:34.961 18:04:53 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:34.961 18:04:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:34.961 18:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.961 18:04:53 -- dd/common.sh@31 -- # xtrace_disable 00:06:34.961 18:04:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.961 18:04:53 -- common/autotest_common.sh@10 -- # set +x 00:06:34.961 ************************************ 00:06:34.961 START TEST dd_copy_to_out_bdev 00:06:34.961 ************************************ 00:06:34.961 18:04:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:34.961 { 00:06:34.961 "subsystems": [ 00:06:34.961 { 00:06:34.961 "subsystem": "bdev", 00:06:34.961 "config": [ 00:06:34.961 { 00:06:34.961 "params": { 00:06:34.961 "trtype": "pcie", 00:06:34.961 "traddr": "0000:00:06.0", 00:06:34.961 "name": "Nvme0" 00:06:34.961 }, 00:06:34.961 "method": "bdev_nvme_attach_controller" 00:06:34.961 }, 00:06:34.961 { 00:06:34.961 "params": { 00:06:34.961 "trtype": "pcie", 00:06:34.961 "traddr": "0000:00:07.0", 00:06:34.961 "name": "Nvme1" 00:06:34.961 }, 00:06:34.961 "method": "bdev_nvme_attach_controller" 00:06:34.961 }, 00:06:34.961 { 00:06:34.961 "method": "bdev_wait_for_examine" 00:06:34.961 } 00:06:34.961 ] 00:06:34.961 } 00:06:34.961 ] 00:06:34.961 } 00:06:34.961 [2024-11-18 18:04:53.371670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
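The dd_inflate_file step and the size check after it are straightforward arithmetic: the dump file starts with the 27-byte line 'This Is Our Magic, find it' (26 characters plus the newline), spdk_dd then appends 64 one-MiB blocks read from /dev/zero, and wc -c is expected to report 64 * 1048576 + 27 = 67108891 bytes. A minimal sketch with an illustrative path:

  #!/usr/bin/env bash
  # Sketch of dd_inflate_file plus the follow-up size check (dump path is illustrative).
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  file0=/tmp/dd.dump0

  echo 'This Is Our Magic, find it' > "$file0"   # 26 chars + newline = 27 bytes
  "$SPDK_DD" --if=/dev/zero --of="$file0" --oflag=append --bs=1048576 --count=64

  size=$(wc -c < "$file0")
  (( size == 64 * 1048576 + 27 )) || { echo "unexpected size $size"; exit 1; }
  echo "file inflated to $size bytes (64 MiB of zeros after the magic line)"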
00:06:34.961 [2024-11-18 18:04:53.371760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58815 ] 00:06:34.961 [2024-11-18 18:04:53.509595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.961 [2024-11-18 18:04:53.557310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.342  [2024-11-18T18:04:55.205Z] Copying: 48/64 [MB] (48 MBps) [2024-11-18T18:04:55.465Z] Copying: 64/64 [MB] (average 48 MBps) 00:06:36.861 00:06:36.861 ************************************ 00:06:36.861 END TEST dd_copy_to_out_bdev 00:06:36.861 ************************************ 00:06:36.861 00:06:36.861 real 0m1.924s 00:06:36.861 user 0m1.707s 00:06:36.861 sys 0m0.152s 00:06:36.861 18:04:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.861 18:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:36.861 18:04:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.861 18:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.861 18:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 ************************************ 00:06:36.861 START TEST dd_offset_magic 00:06:36.861 ************************************ 00:06:36.861 18:04:55 -- common/autotest_common.sh@1114 -- # offset_magic 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:36.861 18:04:55 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:36.861 18:04:55 -- dd/common.sh@31 -- # xtrace_disable 00:06:36.861 18:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:36.861 [2024-11-18 18:04:55.346334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
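dd_copy_to_out_bdev then pushes that inflated file into the Nvme0n1 bdev: --if reads the regular file and --ob writes the block device, with both NVMe controllers attached through the JSON config printed above (traddr 0000:00:06.0 and 0000:00:07.0, the addresses passed to bdev_to_bdev.sh). A hedged sketch, again feeding the config from a temporary file rather than the harness's /dev/fd/62:

  #!/usr/bin/env bash
  # Sketch of the file -> bdev copy; the attach-controller entries mirror the log above.
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  file0=/tmp/dd.dump0                 # the 67108891-byte file from the previous step
  conf=$(mktemp)
  printf '%s\n' '{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }' > "$conf"

  "$SPDK_DD" --if="$file0" --ob=Nvme0n1 --json "$conf"
  rm -f "$conf"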
00:06:36.861 [2024-11-18 18:04:55.346431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58859 ] 00:06:36.861 { 00:06:36.861 "subsystems": [ 00:06:36.861 { 00:06:36.861 "subsystem": "bdev", 00:06:36.861 "config": [ 00:06:36.861 { 00:06:36.861 "params": { 00:06:36.861 "trtype": "pcie", 00:06:36.861 "traddr": "0000:00:06.0", 00:06:36.861 "name": "Nvme0" 00:06:36.861 }, 00:06:36.861 "method": "bdev_nvme_attach_controller" 00:06:36.861 }, 00:06:36.861 { 00:06:36.861 "params": { 00:06:36.861 "trtype": "pcie", 00:06:36.861 "traddr": "0000:00:07.0", 00:06:36.861 "name": "Nvme1" 00:06:36.861 }, 00:06:36.861 "method": "bdev_nvme_attach_controller" 00:06:36.861 }, 00:06:36.861 { 00:06:36.861 "method": "bdev_wait_for_examine" 00:06:36.861 } 00:06:36.861 ] 00:06:36.861 } 00:06:36.861 ] 00:06:36.861 } 00:06:37.121 [2024-11-18 18:04:55.483902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.121 [2024-11-18 18:04:55.536172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.380  [2024-11-18T18:04:55.984Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:37.380 00:06:37.380 18:04:55 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:37.380 18:04:55 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:37.380 18:04:55 -- dd/common.sh@31 -- # xtrace_disable 00:06:37.380 18:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.639 [2024-11-18 18:04:56.026237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:37.639 [2024-11-18 18:04:56.026329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58879 ] 00:06:37.639 { 00:06:37.639 "subsystems": [ 00:06:37.639 { 00:06:37.639 "subsystem": "bdev", 00:06:37.639 "config": [ 00:06:37.639 { 00:06:37.639 "params": { 00:06:37.639 "trtype": "pcie", 00:06:37.639 "traddr": "0000:00:06.0", 00:06:37.639 "name": "Nvme0" 00:06:37.639 }, 00:06:37.639 "method": "bdev_nvme_attach_controller" 00:06:37.639 }, 00:06:37.639 { 00:06:37.639 "params": { 00:06:37.639 "trtype": "pcie", 00:06:37.639 "traddr": "0000:00:07.0", 00:06:37.639 "name": "Nvme1" 00:06:37.639 }, 00:06:37.639 "method": "bdev_nvme_attach_controller" 00:06:37.639 }, 00:06:37.639 { 00:06:37.639 "method": "bdev_wait_for_examine" 00:06:37.639 } 00:06:37.639 ] 00:06:37.639 } 00:06:37.639 ] 00:06:37.639 } 00:06:37.639 [2024-11-18 18:04:56.162416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.639 [2024-11-18 18:04:56.208996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.899  [2024-11-18T18:04:56.762Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:38.158 00:06:38.158 18:04:56 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:38.158 18:04:56 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:38.158 18:04:56 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:38.158 18:04:56 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:38.158 18:04:56 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:38.158 18:04:56 -- dd/common.sh@31 -- # xtrace_disable 00:06:38.158 18:04:56 -- common/autotest_common.sh@10 -- # set +x 00:06:38.158 { 00:06:38.158 "subsystems": [ 00:06:38.158 { 00:06:38.158 "subsystem": "bdev", 00:06:38.158 "config": [ 00:06:38.158 { 00:06:38.158 "params": { 00:06:38.158 "trtype": "pcie", 00:06:38.158 "traddr": "0000:00:06.0", 00:06:38.158 "name": "Nvme0" 00:06:38.158 }, 00:06:38.158 "method": "bdev_nvme_attach_controller" 00:06:38.158 }, 00:06:38.158 { 00:06:38.158 "params": { 00:06:38.158 "trtype": "pcie", 00:06:38.158 "traddr": "0000:00:07.0", 00:06:38.158 "name": "Nvme1" 00:06:38.158 }, 00:06:38.158 "method": "bdev_nvme_attach_controller" 00:06:38.158 }, 00:06:38.158 { 00:06:38.158 "method": "bdev_wait_for_examine" 00:06:38.158 } 00:06:38.158 ] 00:06:38.158 } 00:06:38.158 ] 00:06:38.158 } 00:06:38.158 [2024-11-18 18:04:56.609056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
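dd_offset_magic exercises --seek and --skip in whole blocks: for each offset in (16, 64), it copies 65 blocks of 1 MiB from Nvme0n1 into Nvme1n1 at --seek=<offset>, then reads a single block back from Nvme1n1 at --skip=<offset> into the dump file and expects its first 26 bytes to be the magic string (Nvme0n1 begins with it after the copy above). A hedged sketch of one round; the config path points at the same attach-controller JSON used in the previous sketch.

  #!/usr/bin/env bash
  # One round of the offset/magic round-trip; conf path and dump path are illustrative.
  set -euo pipefail
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf=/tmp/nvme_bdev_conf.json       # attach-controller JSON as in the previous sketch
  file1=/tmp/dd.dump1
  offset=16                           # block offset in 1 MiB units; the test repeats with 64

  # write 65 blocks of Nvme0n1 (whose first bytes are the magic line) at the chosen offset
  "$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json "$conf"

  # read one block back from the same offset and confirm the 26-byte magic survived
  "$SPDK_DD" --ib=Nvme1n1 --of="$file1" --count=1 --skip="$offset" --bs=1048576 --json "$conf"
  read -rn26 magic_check < "$file1"
  [[ $magic_check == 'This Is Our Magic, find it' ]] || { echo "magic lost at offset $offset"; exit 1; }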
00:06:38.158 [2024-11-18 18:04:56.609147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:06:38.158 [2024-11-18 18:04:56.741949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.418 [2024-11-18 18:04:56.790131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.678  [2024-11-18T18:04:57.282Z] Copying: 65/65 [MB] (average 1000 MBps) 00:06:38.678 00:06:38.678 18:04:57 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:38.678 18:04:57 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:38.678 18:04:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:38.678 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.678 [2024-11-18 18:04:57.277861] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.678 [2024-11-18 18:04:57.277946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:06:38.678 { 00:06:38.678 "subsystems": [ 00:06:38.678 { 00:06:38.678 "subsystem": "bdev", 00:06:38.678 "config": [ 00:06:38.678 { 00:06:38.678 "params": { 00:06:38.678 "trtype": "pcie", 00:06:38.678 "traddr": "0000:00:06.0", 00:06:38.678 "name": "Nvme0" 00:06:38.678 }, 00:06:38.678 "method": "bdev_nvme_attach_controller" 00:06:38.678 }, 00:06:38.678 { 00:06:38.678 "params": { 00:06:38.678 "trtype": "pcie", 00:06:38.678 "traddr": "0000:00:07.0", 00:06:38.678 "name": "Nvme1" 00:06:38.678 }, 00:06:38.678 "method": "bdev_nvme_attach_controller" 00:06:38.678 }, 00:06:38.678 { 00:06:38.678 "method": "bdev_wait_for_examine" 00:06:38.678 } 00:06:38.678 ] 00:06:38.678 } 00:06:38.678 ] 00:06:38.678 } 00:06:38.937 [2024-11-18 18:04:57.413067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.937 [2024-11-18 18:04:57.459280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.196  [2024-11-18T18:04:57.800Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:39.196 00:06:39.455 18:04:57 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:39.455 18:04:57 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:39.455 00:06:39.455 real 0m2.504s 00:06:39.455 user 0m1.873s 00:06:39.455 sys 0m0.422s 00:06:39.455 18:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.455 ************************************ 00:06:39.455 END TEST dd_offset_magic 00:06:39.455 ************************************ 00:06:39.455 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.455 18:04:57 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:39.455 18:04:57 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:39.455 18:04:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.455 18:04:57 -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.455 18:04:57 -- dd/common.sh@12 -- # local size=4194330 00:06:39.455 18:04:57 -- dd/common.sh@14 -- # local bs=1048576 00:06:39.455 18:04:57 -- dd/common.sh@15 -- # local count=5 00:06:39.455 18:04:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:39.455 18:04:57 -- dd/common.sh@18 -- # gen_conf 00:06:39.455 18:04:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.455 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:39.455 [2024-11-18 18:04:57.894933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.455 [2024-11-18 18:04:57.895017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ] 00:06:39.455 { 00:06:39.455 "subsystems": [ 00:06:39.455 { 00:06:39.455 "subsystem": "bdev", 00:06:39.455 "config": [ 00:06:39.455 { 00:06:39.455 "params": { 00:06:39.455 "trtype": "pcie", 00:06:39.455 "traddr": "0000:00:06.0", 00:06:39.455 "name": "Nvme0" 00:06:39.455 }, 00:06:39.455 "method": "bdev_nvme_attach_controller" 00:06:39.455 }, 00:06:39.455 { 00:06:39.455 "params": { 00:06:39.455 "trtype": "pcie", 00:06:39.455 "traddr": "0000:00:07.0", 00:06:39.455 "name": "Nvme1" 00:06:39.455 }, 00:06:39.455 "method": "bdev_nvme_attach_controller" 00:06:39.455 }, 00:06:39.455 { 00:06:39.455 "method": "bdev_wait_for_examine" 00:06:39.455 } 00:06:39.455 ] 00:06:39.455 } 00:06:39.455 ] 00:06:39.455 } 00:06:39.455 [2024-11-18 18:04:58.027846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.713 [2024-11-18 18:04:58.077616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.713  [2024-11-18T18:04:58.577Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:06:39.973 00:06:39.973 18:04:58 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:39.973 18:04:58 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:39.973 18:04:58 -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.973 18:04:58 -- dd/common.sh@12 -- # local size=4194330 00:06:39.973 18:04:58 -- dd/common.sh@14 -- # local bs=1048576 00:06:39.973 18:04:58 -- dd/common.sh@15 -- # local count=5 00:06:39.973 18:04:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:39.973 18:04:58 -- dd/common.sh@18 -- # gen_conf 00:06:39.973 18:04:58 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.973 18:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:39.973 [2024-11-18 18:04:58.475155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
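(clear_nvme, running here as part of cleanup, simply zero-fills the beginning of each namespace with the same dd binary: a size of 4194330 bytes at a 1 MiB block size rounds up to count=5, which is why 5120 kB are copied. A minimal equivalent, again assuming the conf.json sketched earlier:

  ./build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=5 --json conf.json
  ./build/bin/spdk_dd --if=/dev/zero --ob=Nvme1n1 --bs=1048576 --count=5 --json conf.json
)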
00:06:39.973 [2024-11-18 18:04:58.475270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58952 ] 00:06:39.973 { 00:06:39.973 "subsystems": [ 00:06:39.973 { 00:06:39.973 "subsystem": "bdev", 00:06:39.973 "config": [ 00:06:39.973 { 00:06:39.973 "params": { 00:06:39.973 "trtype": "pcie", 00:06:39.973 "traddr": "0000:00:06.0", 00:06:39.973 "name": "Nvme0" 00:06:39.973 }, 00:06:39.973 "method": "bdev_nvme_attach_controller" 00:06:39.973 }, 00:06:39.973 { 00:06:39.973 "params": { 00:06:39.973 "trtype": "pcie", 00:06:39.973 "traddr": "0000:00:07.0", 00:06:39.973 "name": "Nvme1" 00:06:39.973 }, 00:06:39.973 "method": "bdev_nvme_attach_controller" 00:06:39.973 }, 00:06:39.973 { 00:06:39.973 "method": "bdev_wait_for_examine" 00:06:39.973 } 00:06:39.973 ] 00:06:39.973 } 00:06:39.973 ] 00:06:39.973 } 00:06:40.232 [2024-11-18 18:04:58.613952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.232 [2024-11-18 18:04:58.669163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.492  [2024-11-18T18:04:59.096Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:40.492 00:06:40.492 18:04:59 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:40.492 00:06:40.492 real 0m6.441s 00:06:40.492 user 0m4.851s 00:06:40.492 sys 0m1.074s 00:06:40.492 18:04:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.492 18:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.492 ************************************ 00:06:40.492 END TEST spdk_dd_bdev_to_bdev 00:06:40.492 ************************************ 00:06:40.492 18:04:59 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:40.492 18:04:59 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:40.492 18:04:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.492 18:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.492 18:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 ************************************ 00:06:40.757 START TEST spdk_dd_uring 00:06:40.757 ************************************ 00:06:40.757 18:04:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:40.757 * Looking for test storage... 
00:06:40.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:40.757 18:04:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:40.757 18:04:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:40.757 18:04:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.757 18:04:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.757 18:04:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.757 18:04:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.757 18:04:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.757 18:04:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.757 18:04:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.757 18:04:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.757 18:04:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.757 18:04:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.757 18:04:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.757 18:04:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.757 18:04:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.757 18:04:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.757 18:04:59 -- scripts/common.sh@344 -- # : 1 00:06:40.757 18:04:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.757 18:04:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.757 18:04:59 -- scripts/common.sh@364 -- # decimal 1 00:06:40.757 18:04:59 -- scripts/common.sh@352 -- # local d=1 00:06:40.757 18:04:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.757 18:04:59 -- scripts/common.sh@354 -- # echo 1 00:06:40.757 18:04:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.757 18:04:59 -- scripts/common.sh@365 -- # decimal 2 00:06:40.757 18:04:59 -- scripts/common.sh@352 -- # local d=2 00:06:40.757 18:04:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.757 18:04:59 -- scripts/common.sh@354 -- # echo 2 00:06:40.757 18:04:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.757 18:04:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.757 18:04:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.757 18:04:59 -- scripts/common.sh@367 -- # return 0 00:06:40.757 18:04:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.757 18:04:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.757 --rc genhtml_branch_coverage=1 00:06:40.757 --rc genhtml_function_coverage=1 00:06:40.757 --rc genhtml_legend=1 00:06:40.757 --rc geninfo_all_blocks=1 00:06:40.757 --rc geninfo_unexecuted_blocks=1 00:06:40.757 00:06:40.757 ' 00:06:40.757 18:04:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.757 --rc genhtml_branch_coverage=1 00:06:40.757 --rc genhtml_function_coverage=1 00:06:40.757 --rc genhtml_legend=1 00:06:40.757 --rc geninfo_all_blocks=1 00:06:40.757 --rc geninfo_unexecuted_blocks=1 00:06:40.757 00:06:40.757 ' 00:06:40.757 18:04:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.757 --rc genhtml_branch_coverage=1 00:06:40.757 --rc genhtml_function_coverage=1 00:06:40.757 --rc genhtml_legend=1 00:06:40.757 --rc geninfo_all_blocks=1 00:06:40.757 --rc geninfo_unexecuted_blocks=1 00:06:40.757 00:06:40.757 ' 00:06:40.757 18:04:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.757 --rc genhtml_branch_coverage=1 00:06:40.757 --rc genhtml_function_coverage=1 00:06:40.757 --rc genhtml_legend=1 00:06:40.757 --rc geninfo_all_blocks=1 00:06:40.757 --rc geninfo_unexecuted_blocks=1 00:06:40.757 00:06:40.757 ' 00:06:40.757 18:04:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.757 18:04:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.757 18:04:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.757 18:04:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.757 18:04:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.757 18:04:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.757 18:04:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.757 18:04:59 -- paths/export.sh@5 -- # export PATH 00:06:40.757 18:04:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.757 18:04:59 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:40.757 18:04:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.757 18:04:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.757 18:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 ************************************ 00:06:40.757 START TEST dd_uring_copy 00:06:40.757 ************************************ 00:06:40.757 18:04:59 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:06:40.757 18:04:59 -- dd/uring.sh@15 -- # local zram_dev_id 00:06:40.757 18:04:59 -- dd/uring.sh@16 -- # local magic 00:06:40.757 18:04:59 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:40.757 18:04:59 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:40.757 18:04:59 -- dd/uring.sh@19 -- # local verify_magic 00:06:40.757 18:04:59 -- dd/uring.sh@21 -- # init_zram 00:06:40.757 18:04:59 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:40.757 18:04:59 -- dd/common.sh@164 -- # return 00:06:40.757 18:04:59 -- dd/uring.sh@22 -- # create_zram_dev 00:06:40.757 18:04:59 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:40.757 18:04:59 -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:40.757 18:04:59 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:40.757 18:04:59 -- dd/common.sh@181 -- # local id=1 00:06:40.757 18:04:59 -- dd/common.sh@182 -- # local size=512M 00:06:40.757 18:04:59 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:40.757 18:04:59 -- dd/common.sh@186 -- # echo 512M 00:06:40.757 18:04:59 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:40.757 18:04:59 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:40.757 18:04:59 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:40.758 18:04:59 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:40.758 18:04:59 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:40.758 18:04:59 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:40.758 18:04:59 -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:40.758 18:04:59 -- dd/common.sh@98 -- # xtrace_disable 00:06:40.758 18:04:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.758 18:04:59 -- dd/uring.sh@41 -- # magic=e5lr0co35xqsx1zkbtryx8ujtl4iecpqderk1nz7altdifxxy4df8bxeiwafdji4010wkhtn5uus3s7hvxv4jbfvwk5bby134dwo3i6zdj00gsyxrbnkzrpi8j89mk9rjqdecpm0i48t8j559ojiouj8b5xom54wgaf1wt70rbwpob8g6tp3o7cijos2x09hibqy0ei4xy1aw9a8uw3u9tgqk29d9xgcy1x3rf331vsy9e449ennsg8caxqux9l88x8kk6tpwr326j3ney8vs2hqc3jtiqaixj5fr7sf4kxw71h7jci077aeoj9drvgap2e995lf13soixhnt5vw1wf7brm1fcz219z4hgnh7ud63lsjzm2jtzfhzvsb7439b339hu152hi3g1v6vbpuc36x3xlnoayx1ibp64q4y4vgf9ibum0451p740az81bvsrmgfhxrp18hd27z8ul0lul1q1828f2qvym5t4pqsutgh81ptgcsvmady9s8nqn7ttk624fe6q1gkbkkoqucyrnfjd8yybfjakil6nfnvvnea0wledosqby5h3w2y73cj0cinwp49702m3y1mxe08164fix6auun2wwmhzcgql2vbgq3rbeo6antbzzic54ddeom0jcg16k3hrioafwj65iucpn8l1llyj6761kc8l0bjgyzve0iji2s2im6h84gqrop8q0olhshptq26hg081hzsefgmene7kcembzi2u1n0zlp5ujqn56pwq4u043fzzlxwbucdv7mtk95bo2nt2pavzin8atnf93153h0tcf23zkx57rauinyxdk4h83l5crv6zah9055vad8qxtiilgqvyg0eamt5ltln73g4sdhoz3i43plgnmx8y400fhj6x29j6pj9qi2g5bhl63vh0k2n3nzecggnjo94i0a077v3cajhzz3jfw0i4qugf4eh1v5d5e92q86jddun832ogx0rt6x9lcyjhzur2e537fsz9frww792dxu7ycm4wiq 00:06:40.758 18:04:59 -- dd/uring.sh@42 -- # echo 
e5lr0co35xqsx1zkbtryx8ujtl4iecpqderk1nz7altdifxxy4df8bxeiwafdji4010wkhtn5uus3s7hvxv4jbfvwk5bby134dwo3i6zdj00gsyxrbnkzrpi8j89mk9rjqdecpm0i48t8j559ojiouj8b5xom54wgaf1wt70rbwpob8g6tp3o7cijos2x09hibqy0ei4xy1aw9a8uw3u9tgqk29d9xgcy1x3rf331vsy9e449ennsg8caxqux9l88x8kk6tpwr326j3ney8vs2hqc3jtiqaixj5fr7sf4kxw71h7jci077aeoj9drvgap2e995lf13soixhnt5vw1wf7brm1fcz219z4hgnh7ud63lsjzm2jtzfhzvsb7439b339hu152hi3g1v6vbpuc36x3xlnoayx1ibp64q4y4vgf9ibum0451p740az81bvsrmgfhxrp18hd27z8ul0lul1q1828f2qvym5t4pqsutgh81ptgcsvmady9s8nqn7ttk624fe6q1gkbkkoqucyrnfjd8yybfjakil6nfnvvnea0wledosqby5h3w2y73cj0cinwp49702m3y1mxe08164fix6auun2wwmhzcgql2vbgq3rbeo6antbzzic54ddeom0jcg16k3hrioafwj65iucpn8l1llyj6761kc8l0bjgyzve0iji2s2im6h84gqrop8q0olhshptq26hg081hzsefgmene7kcembzi2u1n0zlp5ujqn56pwq4u043fzzlxwbucdv7mtk95bo2nt2pavzin8atnf93153h0tcf23zkx57rauinyxdk4h83l5crv6zah9055vad8qxtiilgqvyg0eamt5ltln73g4sdhoz3i43plgnmx8y400fhj6x29j6pj9qi2g5bhl63vh0k2n3nzecggnjo94i0a077v3cajhzz3jfw0i4qugf4eh1v5d5e92q86jddun832ogx0rt6x9lcyjhzur2e537fsz9frww792dxu7ycm4wiq 00:06:40.758 18:04:59 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:41.024 [2024-11-18 18:04:59.367036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.024 [2024-11-18 18:04:59.367156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59028 ] 00:06:41.024 [2024-11-18 18:04:59.501814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.024 [2024-11-18 18:04:59.552030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.591  [2024-11-18T18:05:00.455Z] Copying: 511/511 [MB] (average 1796 MBps) 00:06:41.851 00:06:41.851 18:05:00 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:41.851 18:05:00 -- dd/uring.sh@54 -- # gen_conf 00:06:41.851 18:05:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:41.851 18:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:41.851 [2024-11-18 18:05:00.271898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
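(The uring copy test pairs an SPDK uring bdev, backed by a freshly created zram device, with a 512 MiB malloc bdev. The setup visible above boils down to hot-adding a zram device, sizing it, and naming it in the bdev config; a rough standalone sketch, assuming the kernel zram module is available and that hot_add hands back device index 1 as it does in this run:

  id=$(cat /sys/class/zram-control/hot_add)     # allocate a new zram device, prints its index
  echo 512M > /sys/block/zram"$id"/disksize     # give it 512 MiB of capacity
  # bdev config passed to spdk_dd via --json, mirroring the methods shown in the log:
  #   bdev_malloc_create  { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 }
  #   bdev_uring_create   { "name": "uring0",  "filename": "/dev/zram1" }

magic.dump0, written just above, is the 1024-character marker followed by one appended 536869887-byte block of zeros, i.e. roughly 512 MiB in total, so the file-to-uring0 copy that follows fills the whole zram device.)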
00:06:41.851 [2024-11-18 18:05:00.272014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:06:41.851 { 00:06:41.851 "subsystems": [ 00:06:41.851 { 00:06:41.851 "subsystem": "bdev", 00:06:41.851 "config": [ 00:06:41.851 { 00:06:41.851 "params": { 00:06:41.851 "block_size": 512, 00:06:41.851 "num_blocks": 1048576, 00:06:41.851 "name": "malloc0" 00:06:41.851 }, 00:06:41.851 "method": "bdev_malloc_create" 00:06:41.851 }, 00:06:41.851 { 00:06:41.851 "params": { 00:06:41.851 "filename": "/dev/zram1", 00:06:41.851 "name": "uring0" 00:06:41.851 }, 00:06:41.851 "method": "bdev_uring_create" 00:06:41.851 }, 00:06:41.851 { 00:06:41.851 "method": "bdev_wait_for_examine" 00:06:41.851 } 00:06:41.851 ] 00:06:41.851 } 00:06:41.851 ] 00:06:41.851 } 00:06:41.851 [2024-11-18 18:05:00.408662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.111 [2024-11-18 18:05:00.459624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.047  [2024-11-18T18:05:03.028Z] Copying: 214/512 [MB] (214 MBps) [2024-11-18T18:05:03.028Z] Copying: 427/512 [MB] (213 MBps) [2024-11-18T18:05:03.287Z] Copying: 512/512 [MB] (average 213 MBps) 00:06:44.683 00:06:44.683 18:05:03 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:44.683 18:05:03 -- dd/uring.sh@60 -- # gen_conf 00:06:44.683 18:05:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.683 18:05:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.943 [2024-11-18 18:05:03.314574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.943 [2024-11-18 18:05:03.314687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:06:44.943 { 00:06:44.943 "subsystems": [ 00:06:44.943 { 00:06:44.943 "subsystem": "bdev", 00:06:44.943 "config": [ 00:06:44.943 { 00:06:44.943 "params": { 00:06:44.943 "block_size": 512, 00:06:44.943 "num_blocks": 1048576, 00:06:44.943 "name": "malloc0" 00:06:44.943 }, 00:06:44.943 "method": "bdev_malloc_create" 00:06:44.943 }, 00:06:44.943 { 00:06:44.943 "params": { 00:06:44.943 "filename": "/dev/zram1", 00:06:44.943 "name": "uring0" 00:06:44.943 }, 00:06:44.943 "method": "bdev_uring_create" 00:06:44.943 }, 00:06:44.943 { 00:06:44.943 "method": "bdev_wait_for_examine" 00:06:44.943 } 00:06:44.943 ] 00:06:44.943 } 00:06:44.943 ] 00:06:44.943 } 00:06:44.943 [2024-11-18 18:05:03.449529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.943 [2024-11-18 18:05:03.495771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.321  [2024-11-18T18:05:05.861Z] Copying: 144/512 [MB] (144 MBps) [2024-11-18T18:05:06.798Z] Copying: 274/512 [MB] (129 MBps) [2024-11-18T18:05:07.367Z] Copying: 436/512 [MB] (161 MBps) [2024-11-18T18:05:07.626Z] Copying: 512/512 [MB] (average 139 MBps) 00:06:49.022 00:06:49.022 18:05:07 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:49.022 18:05:07 -- dd/uring.sh@66 -- # [[ e5lr0co35xqsx1zkbtryx8ujtl4iecpqderk1nz7altdifxxy4df8bxeiwafdji4010wkhtn5uus3s7hvxv4jbfvwk5bby134dwo3i6zdj00gsyxrbnkzrpi8j89mk9rjqdecpm0i48t8j559ojiouj8b5xom54wgaf1wt70rbwpob8g6tp3o7cijos2x09hibqy0ei4xy1aw9a8uw3u9tgqk29d9xgcy1x3rf331vsy9e449ennsg8caxqux9l88x8kk6tpwr326j3ney8vs2hqc3jtiqaixj5fr7sf4kxw71h7jci077aeoj9drvgap2e995lf13soixhnt5vw1wf7brm1fcz219z4hgnh7ud63lsjzm2jtzfhzvsb7439b339hu152hi3g1v6vbpuc36x3xlnoayx1ibp64q4y4vgf9ibum0451p740az81bvsrmgfhxrp18hd27z8ul0lul1q1828f2qvym5t4pqsutgh81ptgcsvmady9s8nqn7ttk624fe6q1gkbkkoqucyrnfjd8yybfjakil6nfnvvnea0wledosqby5h3w2y73cj0cinwp49702m3y1mxe08164fix6auun2wwmhzcgql2vbgq3rbeo6antbzzic54ddeom0jcg16k3hrioafwj65iucpn8l1llyj6761kc8l0bjgyzve0iji2s2im6h84gqrop8q0olhshptq26hg081hzsefgmene7kcembzi2u1n0zlp5ujqn56pwq4u043fzzlxwbucdv7mtk95bo2nt2pavzin8atnf93153h0tcf23zkx57rauinyxdk4h83l5crv6zah9055vad8qxtiilgqvyg0eamt5ltln73g4sdhoz3i43plgnmx8y400fhj6x29j6pj9qi2g5bhl63vh0k2n3nzecggnjo94i0a077v3cajhzz3jfw0i4qugf4eh1v5d5e92q86jddun832ogx0rt6x9lcyjhzur2e537fsz9frww792dxu7ycm4wiq == 
\e\5\l\r\0\c\o\3\5\x\q\s\x\1\z\k\b\t\r\y\x\8\u\j\t\l\4\i\e\c\p\q\d\e\r\k\1\n\z\7\a\l\t\d\i\f\x\x\y\4\d\f\8\b\x\e\i\w\a\f\d\j\i\4\0\1\0\w\k\h\t\n\5\u\u\s\3\s\7\h\v\x\v\4\j\b\f\v\w\k\5\b\b\y\1\3\4\d\w\o\3\i\6\z\d\j\0\0\g\s\y\x\r\b\n\k\z\r\p\i\8\j\8\9\m\k\9\r\j\q\d\e\c\p\m\0\i\4\8\t\8\j\5\5\9\o\j\i\o\u\j\8\b\5\x\o\m\5\4\w\g\a\f\1\w\t\7\0\r\b\w\p\o\b\8\g\6\t\p\3\o\7\c\i\j\o\s\2\x\0\9\h\i\b\q\y\0\e\i\4\x\y\1\a\w\9\a\8\u\w\3\u\9\t\g\q\k\2\9\d\9\x\g\c\y\1\x\3\r\f\3\3\1\v\s\y\9\e\4\4\9\e\n\n\s\g\8\c\a\x\q\u\x\9\l\8\8\x\8\k\k\6\t\p\w\r\3\2\6\j\3\n\e\y\8\v\s\2\h\q\c\3\j\t\i\q\a\i\x\j\5\f\r\7\s\f\4\k\x\w\7\1\h\7\j\c\i\0\7\7\a\e\o\j\9\d\r\v\g\a\p\2\e\9\9\5\l\f\1\3\s\o\i\x\h\n\t\5\v\w\1\w\f\7\b\r\m\1\f\c\z\2\1\9\z\4\h\g\n\h\7\u\d\6\3\l\s\j\z\m\2\j\t\z\f\h\z\v\s\b\7\4\3\9\b\3\3\9\h\u\1\5\2\h\i\3\g\1\v\6\v\b\p\u\c\3\6\x\3\x\l\n\o\a\y\x\1\i\b\p\6\4\q\4\y\4\v\g\f\9\i\b\u\m\0\4\5\1\p\7\4\0\a\z\8\1\b\v\s\r\m\g\f\h\x\r\p\1\8\h\d\2\7\z\8\u\l\0\l\u\l\1\q\1\8\2\8\f\2\q\v\y\m\5\t\4\p\q\s\u\t\g\h\8\1\p\t\g\c\s\v\m\a\d\y\9\s\8\n\q\n\7\t\t\k\6\2\4\f\e\6\q\1\g\k\b\k\k\o\q\u\c\y\r\n\f\j\d\8\y\y\b\f\j\a\k\i\l\6\n\f\n\v\v\n\e\a\0\w\l\e\d\o\s\q\b\y\5\h\3\w\2\y\7\3\c\j\0\c\i\n\w\p\4\9\7\0\2\m\3\y\1\m\x\e\0\8\1\6\4\f\i\x\6\a\u\u\n\2\w\w\m\h\z\c\g\q\l\2\v\b\g\q\3\r\b\e\o\6\a\n\t\b\z\z\i\c\5\4\d\d\e\o\m\0\j\c\g\1\6\k\3\h\r\i\o\a\f\w\j\6\5\i\u\c\p\n\8\l\1\l\l\y\j\6\7\6\1\k\c\8\l\0\b\j\g\y\z\v\e\0\i\j\i\2\s\2\i\m\6\h\8\4\g\q\r\o\p\8\q\0\o\l\h\s\h\p\t\q\2\6\h\g\0\8\1\h\z\s\e\f\g\m\e\n\e\7\k\c\e\m\b\z\i\2\u\1\n\0\z\l\p\5\u\j\q\n\5\6\p\w\q\4\u\0\4\3\f\z\z\l\x\w\b\u\c\d\v\7\m\t\k\9\5\b\o\2\n\t\2\p\a\v\z\i\n\8\a\t\n\f\9\3\1\5\3\h\0\t\c\f\2\3\z\k\x\5\7\r\a\u\i\n\y\x\d\k\4\h\8\3\l\5\c\r\v\6\z\a\h\9\0\5\5\v\a\d\8\q\x\t\i\i\l\g\q\v\y\g\0\e\a\m\t\5\l\t\l\n\7\3\g\4\s\d\h\o\z\3\i\4\3\p\l\g\n\m\x\8\y\4\0\0\f\h\j\6\x\2\9\j\6\p\j\9\q\i\2\g\5\b\h\l\6\3\v\h\0\k\2\n\3\n\z\e\c\g\g\n\j\o\9\4\i\0\a\0\7\7\v\3\c\a\j\h\z\z\3\j\f\w\0\i\4\q\u\g\f\4\e\h\1\v\5\d\5\e\9\2\q\8\6\j\d\d\u\n\8\3\2\o\g\x\0\r\t\6\x\9\l\c\y\j\h\z\u\r\2\e\5\3\7\f\s\z\9\f\r\w\w\7\9\2\d\x\u\7\y\c\m\4\w\i\q ]] 00:06:49.022 18:05:07 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:49.022 18:05:07 -- dd/uring.sh@69 -- # [[ e5lr0co35xqsx1zkbtryx8ujtl4iecpqderk1nz7altdifxxy4df8bxeiwafdji4010wkhtn5uus3s7hvxv4jbfvwk5bby134dwo3i6zdj00gsyxrbnkzrpi8j89mk9rjqdecpm0i48t8j559ojiouj8b5xom54wgaf1wt70rbwpob8g6tp3o7cijos2x09hibqy0ei4xy1aw9a8uw3u9tgqk29d9xgcy1x3rf331vsy9e449ennsg8caxqux9l88x8kk6tpwr326j3ney8vs2hqc3jtiqaixj5fr7sf4kxw71h7jci077aeoj9drvgap2e995lf13soixhnt5vw1wf7brm1fcz219z4hgnh7ud63lsjzm2jtzfhzvsb7439b339hu152hi3g1v6vbpuc36x3xlnoayx1ibp64q4y4vgf9ibum0451p740az81bvsrmgfhxrp18hd27z8ul0lul1q1828f2qvym5t4pqsutgh81ptgcsvmady9s8nqn7ttk624fe6q1gkbkkoqucyrnfjd8yybfjakil6nfnvvnea0wledosqby5h3w2y73cj0cinwp49702m3y1mxe08164fix6auun2wwmhzcgql2vbgq3rbeo6antbzzic54ddeom0jcg16k3hrioafwj65iucpn8l1llyj6761kc8l0bjgyzve0iji2s2im6h84gqrop8q0olhshptq26hg081hzsefgmene7kcembzi2u1n0zlp5ujqn56pwq4u043fzzlxwbucdv7mtk95bo2nt2pavzin8atnf93153h0tcf23zkx57rauinyxdk4h83l5crv6zah9055vad8qxtiilgqvyg0eamt5ltln73g4sdhoz3i43plgnmx8y400fhj6x29j6pj9qi2g5bhl63vh0k2n3nzecggnjo94i0a077v3cajhzz3jfw0i4qugf4eh1v5d5e92q86jddun832ogx0rt6x9lcyjhzur2e537fsz9frww792dxu7ycm4wiq == 
\e\5\l\r\0\c\o\3\5\x\q\s\x\1\z\k\b\t\r\y\x\8\u\j\t\l\4\i\e\c\p\q\d\e\r\k\1\n\z\7\a\l\t\d\i\f\x\x\y\4\d\f\8\b\x\e\i\w\a\f\d\j\i\4\0\1\0\w\k\h\t\n\5\u\u\s\3\s\7\h\v\x\v\4\j\b\f\v\w\k\5\b\b\y\1\3\4\d\w\o\3\i\6\z\d\j\0\0\g\s\y\x\r\b\n\k\z\r\p\i\8\j\8\9\m\k\9\r\j\q\d\e\c\p\m\0\i\4\8\t\8\j\5\5\9\o\j\i\o\u\j\8\b\5\x\o\m\5\4\w\g\a\f\1\w\t\7\0\r\b\w\p\o\b\8\g\6\t\p\3\o\7\c\i\j\o\s\2\x\0\9\h\i\b\q\y\0\e\i\4\x\y\1\a\w\9\a\8\u\w\3\u\9\t\g\q\k\2\9\d\9\x\g\c\y\1\x\3\r\f\3\3\1\v\s\y\9\e\4\4\9\e\n\n\s\g\8\c\a\x\q\u\x\9\l\8\8\x\8\k\k\6\t\p\w\r\3\2\6\j\3\n\e\y\8\v\s\2\h\q\c\3\j\t\i\q\a\i\x\j\5\f\r\7\s\f\4\k\x\w\7\1\h\7\j\c\i\0\7\7\a\e\o\j\9\d\r\v\g\a\p\2\e\9\9\5\l\f\1\3\s\o\i\x\h\n\t\5\v\w\1\w\f\7\b\r\m\1\f\c\z\2\1\9\z\4\h\g\n\h\7\u\d\6\3\l\s\j\z\m\2\j\t\z\f\h\z\v\s\b\7\4\3\9\b\3\3\9\h\u\1\5\2\h\i\3\g\1\v\6\v\b\p\u\c\3\6\x\3\x\l\n\o\a\y\x\1\i\b\p\6\4\q\4\y\4\v\g\f\9\i\b\u\m\0\4\5\1\p\7\4\0\a\z\8\1\b\v\s\r\m\g\f\h\x\r\p\1\8\h\d\2\7\z\8\u\l\0\l\u\l\1\q\1\8\2\8\f\2\q\v\y\m\5\t\4\p\q\s\u\t\g\h\8\1\p\t\g\c\s\v\m\a\d\y\9\s\8\n\q\n\7\t\t\k\6\2\4\f\e\6\q\1\g\k\b\k\k\o\q\u\c\y\r\n\f\j\d\8\y\y\b\f\j\a\k\i\l\6\n\f\n\v\v\n\e\a\0\w\l\e\d\o\s\q\b\y\5\h\3\w\2\y\7\3\c\j\0\c\i\n\w\p\4\9\7\0\2\m\3\y\1\m\x\e\0\8\1\6\4\f\i\x\6\a\u\u\n\2\w\w\m\h\z\c\g\q\l\2\v\b\g\q\3\r\b\e\o\6\a\n\t\b\z\z\i\c\5\4\d\d\e\o\m\0\j\c\g\1\6\k\3\h\r\i\o\a\f\w\j\6\5\i\u\c\p\n\8\l\1\l\l\y\j\6\7\6\1\k\c\8\l\0\b\j\g\y\z\v\e\0\i\j\i\2\s\2\i\m\6\h\8\4\g\q\r\o\p\8\q\0\o\l\h\s\h\p\t\q\2\6\h\g\0\8\1\h\z\s\e\f\g\m\e\n\e\7\k\c\e\m\b\z\i\2\u\1\n\0\z\l\p\5\u\j\q\n\5\6\p\w\q\4\u\0\4\3\f\z\z\l\x\w\b\u\c\d\v\7\m\t\k\9\5\b\o\2\n\t\2\p\a\v\z\i\n\8\a\t\n\f\9\3\1\5\3\h\0\t\c\f\2\3\z\k\x\5\7\r\a\u\i\n\y\x\d\k\4\h\8\3\l\5\c\r\v\6\z\a\h\9\0\5\5\v\a\d\8\q\x\t\i\i\l\g\q\v\y\g\0\e\a\m\t\5\l\t\l\n\7\3\g\4\s\d\h\o\z\3\i\4\3\p\l\g\n\m\x\8\y\4\0\0\f\h\j\6\x\2\9\j\6\p\j\9\q\i\2\g\5\b\h\l\6\3\v\h\0\k\2\n\3\n\z\e\c\g\g\n\j\o\9\4\i\0\a\0\7\7\v\3\c\a\j\h\z\z\3\j\f\w\0\i\4\q\u\g\f\4\e\h\1\v\5\d\5\e\9\2\q\8\6\j\d\d\u\n\8\3\2\o\g\x\0\r\t\6\x\9\l\c\y\j\h\z\u\r\2\e\5\3\7\f\s\z\9\f\r\w\w\7\9\2\d\x\u\7\y\c\m\4\w\i\q ]] 00:06:49.023 18:05:07 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:49.591 18:05:07 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:49.591 18:05:07 -- dd/uring.sh@75 -- # gen_conf 00:06:49.591 18:05:07 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.591 18:05:07 -- common/autotest_common.sh@10 -- # set +x 00:06:49.591 { 00:06:49.591 "subsystems": [ 00:06:49.591 { 00:06:49.591 "subsystem": "bdev", 00:06:49.591 "config": [ 00:06:49.591 { 00:06:49.591 "params": { 00:06:49.591 "block_size": 512, 00:06:49.591 "num_blocks": 1048576, 00:06:49.591 "name": "malloc0" 00:06:49.591 }, 00:06:49.591 "method": "bdev_malloc_create" 00:06:49.591 }, 00:06:49.591 { 00:06:49.591 "params": { 00:06:49.591 "filename": "/dev/zram1", 00:06:49.591 "name": "uring0" 00:06:49.591 }, 00:06:49.591 "method": "bdev_uring_create" 00:06:49.591 }, 00:06:49.591 { 00:06:49.591 "method": "bdev_wait_for_examine" 00:06:49.591 } 00:06:49.591 ] 00:06:49.591 } 00:06:49.591 ] 00:06:49.591 } 00:06:49.591 [2024-11-18 18:05:07.997582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
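(Verification of the uring round trip is two-layered: the first 1024 bytes read back from magic.dump1 must equal the generated marker, and diff -q must find magic.dump0 and magic.dump1 byte-identical. In shell terms, roughly:

  read -rn1024 verify_magic < test/dd/magic.dump1
  [[ $verify_magic == "$magic" ]]
  diff -q test/dd/magic.dump0 test/dd/magic.dump1

The copy from uring0 into malloc0 that starts below then re-reads the whole zram-backed bdev into the RAM bdev; no output file is produced there, so success is essentially the run completing without error.)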
00:06:49.591 [2024-11-18 18:05:07.997689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:06:49.591 [2024-11-18 18:05:08.134160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.591 [2024-11-18 18:05:08.182740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.970  [2024-11-18T18:05:10.511Z] Copying: 169/512 [MB] (169 MBps) [2024-11-18T18:05:11.451Z] Copying: 339/512 [MB] (169 MBps) [2024-11-18T18:05:11.451Z] Copying: 510/512 [MB] (171 MBps) [2024-11-18T18:05:11.716Z] Copying: 512/512 [MB] (average 170 MBps) 00:06:53.112 00:06:53.112 18:05:11 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:53.112 18:05:11 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:53.112 18:05:11 -- dd/uring.sh@87 -- # : 00:06:53.112 18:05:11 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:53.112 18:05:11 -- dd/uring.sh@87 -- # gen_conf 00:06:53.112 18:05:11 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.112 18:05:11 -- common/autotest_common.sh@10 -- # set +x 00:06:53.112 18:05:11 -- dd/uring.sh@87 -- # : 00:06:53.112 [2024-11-18 18:05:11.631022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.112 [2024-11-18 18:05:11.631123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:06:53.112 { 00:06:53.112 "subsystems": [ 00:06:53.112 { 00:06:53.112 "subsystem": "bdev", 00:06:53.112 "config": [ 00:06:53.112 { 00:06:53.112 "params": { 00:06:53.112 "block_size": 512, 00:06:53.112 "num_blocks": 1048576, 00:06:53.112 "name": "malloc0" 00:06:53.112 }, 00:06:53.112 "method": "bdev_malloc_create" 00:06:53.112 }, 00:06:53.112 { 00:06:53.113 "params": { 00:06:53.113 "filename": "/dev/zram1", 00:06:53.113 "name": "uring0" 00:06:53.113 }, 00:06:53.113 "method": "bdev_uring_create" 00:06:53.113 }, 00:06:53.113 { 00:06:53.113 "params": { 00:06:53.113 "name": "uring0" 00:06:53.113 }, 00:06:53.113 "method": "bdev_uring_delete" 00:06:53.113 }, 00:06:53.113 { 00:06:53.113 "method": "bdev_wait_for_examine" 00:06:53.113 } 00:06:53.113 ] 00:06:53.113 } 00:06:53.113 ] 00:06:53.113 } 00:06:53.373 [2024-11-18 18:05:11.767227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.373 [2024-11-18 18:05:11.813612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.373  [2024-11-18T18:05:12.236Z] Copying: 0/0 [B] (average 0 Bps) 00:06:53.632 00:06:53.632 18:05:12 -- dd/uring.sh@94 -- # : 00:06:53.632 18:05:12 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.632 18:05:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:53.632 18:05:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.632 18:05:12 -- dd/uring.sh@94 -- # gen_conf 00:06:53.632 18:05:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.632 18:05:12 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.632 18:05:12 -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.632 18:05:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.632 18:05:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.632 18:05:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.632 18:05:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.632 18:05:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.632 18:05:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.632 18:05:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.632 18:05:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:53.891 [2024-11-18 18:05:12.275990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.891 [2024-11-18 18:05:12.276084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:06:53.891 { 00:06:53.891 "subsystems": [ 00:06:53.891 { 00:06:53.891 "subsystem": "bdev", 00:06:53.891 "config": [ 00:06:53.891 { 00:06:53.891 "params": { 00:06:53.891 "block_size": 512, 00:06:53.891 "num_blocks": 1048576, 00:06:53.891 "name": "malloc0" 00:06:53.891 }, 00:06:53.891 "method": "bdev_malloc_create" 00:06:53.891 }, 00:06:53.891 { 00:06:53.891 "params": { 00:06:53.891 "filename": "/dev/zram1", 00:06:53.891 "name": "uring0" 00:06:53.891 }, 00:06:53.891 "method": "bdev_uring_create" 00:06:53.891 }, 00:06:53.891 { 00:06:53.891 "params": { 00:06:53.891 "name": "uring0" 00:06:53.891 }, 00:06:53.891 "method": "bdev_uring_delete" 00:06:53.891 }, 00:06:53.891 { 00:06:53.891 "method": "bdev_wait_for_examine" 00:06:53.891 } 00:06:53.891 ] 00:06:53.891 } 00:06:53.891 ] 00:06:53.891 } 00:06:53.891 [2024-11-18 18:05:12.413074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.891 [2024-11-18 18:05:12.464454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.150 [2024-11-18 18:05:12.604094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:54.150 [2024-11-18 18:05:12.604156] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:54.150 [2024-11-18 18:05:12.604181] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:06:54.150 [2024-11-18 18:05:12.604190] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.409 [2024-11-18 18:05:12.759689] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:54.409 18:05:12 -- common/autotest_common.sh@653 -- # es=237 00:06:54.409 18:05:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.409 18:05:12 -- common/autotest_common.sh@662 -- # es=109 00:06:54.409 18:05:12 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.409 18:05:12 -- common/autotest_common.sh@670 -- # es=1 00:06:54.409 18:05:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.409 18:05:12 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:54.409 18:05:12 -- dd/common.sh@172 -- # local id=1 00:06:54.409 18:05:12 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:06:54.409 18:05:12 -- dd/common.sh@176 
-- # echo 1 00:06:54.409 18:05:12 -- dd/common.sh@177 -- # echo 1 00:06:54.410 18:05:12 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:54.668 00:06:54.668 real 0m13.840s 00:06:54.668 user 0m7.832s 00:06:54.668 sys 0m5.313s 00:06:54.668 18:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.668 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.668 ************************************ 00:06:54.668 END TEST dd_uring_copy 00:06:54.668 ************************************ 00:06:54.668 00:06:54.668 real 0m14.067s 00:06:54.668 user 0m7.962s 00:06:54.668 sys 0m5.418s 00:06:54.668 18:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.668 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.668 ************************************ 00:06:54.668 END TEST spdk_dd_uring 00:06:54.668 ************************************ 00:06:54.668 18:05:13 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:54.668 18:05:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.668 18:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.668 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.669 ************************************ 00:06:54.669 START TEST spdk_dd_sparse 00:06:54.669 ************************************ 00:06:54.669 18:05:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:54.934 * Looking for test storage... 00:06:54.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:54.934 18:05:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:54.934 18:05:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:54.934 18:05:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:54.934 18:05:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:54.934 18:05:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:54.934 18:05:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:54.934 18:05:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:54.934 18:05:13 -- scripts/common.sh@335 -- # IFS=.-: 00:06:54.934 18:05:13 -- scripts/common.sh@335 -- # read -ra ver1 00:06:54.934 18:05:13 -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.934 18:05:13 -- scripts/common.sh@336 -- # read -ra ver2 00:06:54.934 18:05:13 -- scripts/common.sh@337 -- # local 'op=<' 00:06:54.934 18:05:13 -- scripts/common.sh@339 -- # ver1_l=2 00:06:54.934 18:05:13 -- scripts/common.sh@340 -- # ver2_l=1 00:06:54.934 18:05:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:54.934 18:05:13 -- scripts/common.sh@343 -- # case "$op" in 00:06:54.934 18:05:13 -- scripts/common.sh@344 -- # : 1 00:06:54.934 18:05:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:54.934 18:05:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.934 18:05:13 -- scripts/common.sh@364 -- # decimal 1 00:06:54.934 18:05:13 -- scripts/common.sh@352 -- # local d=1 00:06:54.934 18:05:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.934 18:05:13 -- scripts/common.sh@354 -- # echo 1 00:06:54.934 18:05:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:54.934 18:05:13 -- scripts/common.sh@365 -- # decimal 2 00:06:54.934 18:05:13 -- scripts/common.sh@352 -- # local d=2 00:06:54.934 18:05:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.934 18:05:13 -- scripts/common.sh@354 -- # echo 2 00:06:54.934 18:05:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:54.934 18:05:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:54.934 18:05:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:54.934 18:05:13 -- scripts/common.sh@367 -- # return 0 00:06:54.934 18:05:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.934 18:05:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.934 --rc genhtml_branch_coverage=1 00:06:54.934 --rc genhtml_function_coverage=1 00:06:54.934 --rc genhtml_legend=1 00:06:54.934 --rc geninfo_all_blocks=1 00:06:54.934 --rc geninfo_unexecuted_blocks=1 00:06:54.934 00:06:54.934 ' 00:06:54.934 18:05:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.935 --rc genhtml_branch_coverage=1 00:06:54.935 --rc genhtml_function_coverage=1 00:06:54.935 --rc genhtml_legend=1 00:06:54.935 --rc geninfo_all_blocks=1 00:06:54.935 --rc geninfo_unexecuted_blocks=1 00:06:54.935 00:06:54.935 ' 00:06:54.935 18:05:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:54.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.935 --rc genhtml_branch_coverage=1 00:06:54.935 --rc genhtml_function_coverage=1 00:06:54.935 --rc genhtml_legend=1 00:06:54.935 --rc geninfo_all_blocks=1 00:06:54.935 --rc geninfo_unexecuted_blocks=1 00:06:54.935 00:06:54.935 ' 00:06:54.935 18:05:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:54.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.935 --rc genhtml_branch_coverage=1 00:06:54.935 --rc genhtml_function_coverage=1 00:06:54.935 --rc genhtml_legend=1 00:06:54.935 --rc geninfo_all_blocks=1 00:06:54.935 --rc geninfo_unexecuted_blocks=1 00:06:54.935 00:06:54.935 ' 00:06:54.935 18:05:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.935 18:05:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.935 18:05:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.935 18:05:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.935 18:05:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.935 18:05:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.935 18:05:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.935 18:05:13 -- paths/export.sh@5 -- # export PATH 00:06:54.935 18:05:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.935 18:05:13 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:54.935 18:05:13 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:54.935 18:05:13 -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:54.935 18:05:13 -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:54.935 18:05:13 -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:54.935 18:05:13 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:54.935 18:05:13 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:54.935 18:05:13 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:54.935 18:05:13 -- dd/sparse.sh@118 -- # prepare 00:06:54.935 18:05:13 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:54.935 18:05:13 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:54.935 1+0 records in 00:06:54.935 1+0 records out 00:06:54.935 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00694402 s, 604 MB/s 00:06:54.935 18:05:13 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:54.935 1+0 records in 00:06:54.935 1+0 records out 00:06:54.935 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00661018 s, 635 MB/s 00:06:54.935 18:05:13 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:54.935 1+0 records in 00:06:54.935 1+0 records out 00:06:54.935 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0055565 s, 755 MB/s 00:06:54.935 18:05:13 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:54.935 18:05:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.935 18:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.935 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 ************************************ 00:06:54.935 START TEST dd_sparse_file_to_file 00:06:54.935 
************************************ 00:06:54.935 18:05:13 -- common/autotest_common.sh@1114 -- # file_to_file 00:06:54.935 18:05:13 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:54.935 18:05:13 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:54.935 18:05:13 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:54.935 18:05:13 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:54.935 18:05:13 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:54.935 18:05:13 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:54.935 18:05:13 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:54.935 18:05:13 -- dd/sparse.sh@41 -- # gen_conf 00:06:54.935 18:05:13 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.935 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.935 [2024-11-18 18:05:13.481130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.935 [2024-11-18 18:05:13.481988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ] 00:06:54.935 { 00:06:54.935 "subsystems": [ 00:06:54.935 { 00:06:54.935 "subsystem": "bdev", 00:06:54.935 "config": [ 00:06:54.935 { 00:06:54.935 "params": { 00:06:54.935 "block_size": 4096, 00:06:54.935 "filename": "dd_sparse_aio_disk", 00:06:54.935 "name": "dd_aio" 00:06:54.935 }, 00:06:54.935 "method": "bdev_aio_create" 00:06:54.935 }, 00:06:54.935 { 00:06:54.935 "params": { 00:06:54.935 "lvs_name": "dd_lvstore", 00:06:54.935 "bdev_name": "dd_aio" 00:06:54.935 }, 00:06:54.935 "method": "bdev_lvol_create_lvstore" 00:06:54.935 }, 00:06:54.935 { 00:06:54.935 "method": "bdev_wait_for_examine" 00:06:54.935 } 00:06:54.935 ] 00:06:54.935 } 00:06:54.935 ] 00:06:54.935 } 00:06:55.196 [2024-11-18 18:05:13.619002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.196 [2024-11-18 18:05:13.667359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.196  [2024-11-18T18:05:14.059Z] Copying: 12/36 [MB] (average 1714 MBps) 00:06:55.455 00:06:55.455 18:05:13 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:55.455 18:05:13 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:55.455 18:05:13 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:55.455 18:05:13 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:55.455 18:05:13 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:55.455 18:05:13 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:55.455 18:05:13 -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:55.455 18:05:13 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:55.455 18:05:13 -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:55.455 18:05:13 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:55.455 00:06:55.455 real 0m0.552s 00:06:55.455 user 0m0.327s 00:06:55.455 sys 0m0.126s 00:06:55.455 18:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.455 18:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:55.455 ************************************ 00:06:55.455 END TEST dd_sparse_file_to_file 00:06:55.455 ************************************ 00:06:55.455 18:05:14 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:06:55.455 18:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.455 18:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.455 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:55.455 ************************************ 00:06:55.455 START TEST dd_sparse_file_to_bdev 00:06:55.455 ************************************ 00:06:55.455 18:05:14 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:06:55.455 18:05:14 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:55.455 18:05:14 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:55.455 18:05:14 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:06:55.455 18:05:14 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:55.455 18:05:14 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:55.455 18:05:14 -- dd/sparse.sh@73 -- # gen_conf 00:06:55.455 18:05:14 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.455 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:55.714 [2024-11-18 18:05:14.081168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.714 [2024-11-18 18:05:14.081271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59383 ] 00:06:55.714 { 00:06:55.714 "subsystems": [ 00:06:55.714 { 00:06:55.714 "subsystem": "bdev", 00:06:55.714 "config": [ 00:06:55.714 { 00:06:55.714 "params": { 00:06:55.714 "block_size": 4096, 00:06:55.714 "filename": "dd_sparse_aio_disk", 00:06:55.714 "name": "dd_aio" 00:06:55.714 }, 00:06:55.714 "method": "bdev_aio_create" 00:06:55.714 }, 00:06:55.714 { 00:06:55.714 "params": { 00:06:55.714 "lvs_name": "dd_lvstore", 00:06:55.714 "lvol_name": "dd_lvol", 00:06:55.714 "size": 37748736, 00:06:55.714 "thin_provision": true 00:06:55.714 }, 00:06:55.714 "method": "bdev_lvol_create" 00:06:55.714 }, 00:06:55.714 { 00:06:55.714 "method": "bdev_wait_for_examine" 00:06:55.714 } 00:06:55.714 ] 00:06:55.714 } 00:06:55.714 ] 00:06:55.714 } 00:06:55.714 [2024-11-18 18:05:14.214697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.714 [2024-11-18 18:05:14.263442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.973 [2024-11-18 18:05:14.325959] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:06:55.973  [2024-11-18T18:05:14.577Z] Copying: 12/36 [MB] (average 352 MBps)[2024-11-18 18:05:14.375690] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:06:55.973 00:06:55.973 00:06:55.973 00:06:55.973 real 0m0.540s 00:06:55.973 user 0m0.354s 00:06:55.973 sys 0m0.108s 00:06:55.973 18:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.973 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:55.973 ************************************ 00:06:55.973 END TEST dd_sparse_file_to_bdev 00:06:55.973 ************************************ 
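(The sparse checks lean on the gap between a file's apparent length and its allocated blocks: file_zero1 was built from three 4 MiB writes at offsets 0, 16 MiB and 32 MiB, so it reports 37748736 bytes of length but only 24576 512-byte blocks (12 MiB) of real allocation, and spdk_dd --sparse is expected to reproduce both numbers in its output. That is what the stat pairs above assert; a minimal version of the same comparison, assuming GNU stat:

  src_len=$(stat --printf=%s file_zero1); dst_len=$(stat --printf=%s file_zero2)
  src_blk=$(stat --printf=%b file_zero1); dst_blk=$(stat --printf=%b file_zero2)
  [[ $src_len -eq $dst_len && $src_blk -eq $dst_blk ]]   # same logical size, same allocation

The file_to_bdev case that just finished imports file_zero2 into a thin-provisioned lvol instead, so only the written clusters consume space on dd_aio.)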
00:06:56.232 18:05:14 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:06:56.232 18:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.232 18:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.232 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.232 ************************************ 00:06:56.232 START TEST dd_sparse_bdev_to_file 00:06:56.232 ************************************ 00:06:56.232 18:05:14 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:06:56.232 18:05:14 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:56.232 18:05:14 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:56.232 18:05:14 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:56.232 18:05:14 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:56.233 18:05:14 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:56.233 18:05:14 -- dd/sparse.sh@91 -- # gen_conf 00:06:56.233 18:05:14 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.233 18:05:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.233 [2024-11-18 18:05:14.677708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.233 [2024-11-18 18:05:14.677811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59409 ] 00:06:56.233 { 00:06:56.233 "subsystems": [ 00:06:56.233 { 00:06:56.233 "subsystem": "bdev", 00:06:56.233 "config": [ 00:06:56.233 { 00:06:56.233 "params": { 00:06:56.233 "block_size": 4096, 00:06:56.233 "filename": "dd_sparse_aio_disk", 00:06:56.233 "name": "dd_aio" 00:06:56.233 }, 00:06:56.233 "method": "bdev_aio_create" 00:06:56.233 }, 00:06:56.233 { 00:06:56.233 "method": "bdev_wait_for_examine" 00:06:56.233 } 00:06:56.233 ] 00:06:56.233 } 00:06:56.233 ] 00:06:56.233 } 00:06:56.233 [2024-11-18 18:05:14.813836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.492 [2024-11-18 18:05:14.862269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.492  [2024-11-18T18:05:15.355Z] Copying: 12/36 [MB] (average 1333 MBps) 00:06:56.751 00:06:56.751 18:05:15 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:56.751 18:05:15 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:56.751 18:05:15 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:56.751 18:05:15 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:56.751 18:05:15 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:56.751 18:05:15 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:56.751 18:05:15 -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:56.751 18:05:15 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:56.751 18:05:15 -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:56.751 18:05:15 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:56.751 00:06:56.751 real 0m0.546s 00:06:56.751 user 0m0.331s 00:06:56.751 sys 0m0.129s 00:06:56.751 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.751 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:56.751 ************************************ 00:06:56.751 END TEST dd_sparse_bdev_to_file 00:06:56.751 ************************************ 00:06:56.751 18:05:15 -- 
dd/sparse.sh@1 -- # cleanup 00:06:56.751 18:05:15 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:56.751 18:05:15 -- dd/sparse.sh@12 -- # rm file_zero1 00:06:56.751 18:05:15 -- dd/sparse.sh@13 -- # rm file_zero2 00:06:56.751 18:05:15 -- dd/sparse.sh@14 -- # rm file_zero3 00:06:56.751 00:06:56.751 real 0m2.026s 00:06:56.751 user 0m1.179s 00:06:56.751 sys 0m0.579s 00:06:56.751 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.751 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:56.751 ************************************ 00:06:56.751 END TEST spdk_dd_sparse 00:06:56.751 ************************************ 00:06:56.751 18:05:15 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:56.751 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.751 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.751 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:56.751 ************************************ 00:06:56.751 START TEST spdk_dd_negative 00:06:56.751 ************************************ 00:06:56.751 18:05:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:57.022 * Looking for test storage... 00:06:57.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.022 18:05:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:57.023 18:05:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:57.023 18:05:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:57.023 18:05:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:57.023 18:05:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:57.023 18:05:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:57.023 18:05:15 -- scripts/common.sh@335 -- # IFS=.-: 00:06:57.023 18:05:15 -- scripts/common.sh@335 -- # read -ra ver1 00:06:57.023 18:05:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.023 18:05:15 -- scripts/common.sh@336 -- # read -ra ver2 00:06:57.023 18:05:15 -- scripts/common.sh@337 -- # local 'op=<' 00:06:57.023 18:05:15 -- scripts/common.sh@339 -- # ver1_l=2 00:06:57.023 18:05:15 -- scripts/common.sh@340 -- # ver2_l=1 00:06:57.023 18:05:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:57.023 18:05:15 -- scripts/common.sh@343 -- # case "$op" in 00:06:57.023 18:05:15 -- scripts/common.sh@344 -- # : 1 00:06:57.023 18:05:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:57.023 18:05:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.023 18:05:15 -- scripts/common.sh@364 -- # decimal 1 00:06:57.023 18:05:15 -- scripts/common.sh@352 -- # local d=1 00:06:57.023 18:05:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.023 18:05:15 -- scripts/common.sh@354 -- # echo 1 00:06:57.023 18:05:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:57.023 18:05:15 -- scripts/common.sh@365 -- # decimal 2 00:06:57.023 18:05:15 -- scripts/common.sh@352 -- # local d=2 00:06:57.023 18:05:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.023 18:05:15 -- scripts/common.sh@354 -- # echo 2 00:06:57.023 18:05:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:57.023 18:05:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:57.023 18:05:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:57.023 18:05:15 -- scripts/common.sh@367 -- # return 0 00:06:57.023 18:05:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:57.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.023 --rc genhtml_branch_coverage=1 00:06:57.023 --rc genhtml_function_coverage=1 00:06:57.023 --rc genhtml_legend=1 00:06:57.023 --rc geninfo_all_blocks=1 00:06:57.023 --rc geninfo_unexecuted_blocks=1 00:06:57.023 00:06:57.023 ' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:57.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.023 --rc genhtml_branch_coverage=1 00:06:57.023 --rc genhtml_function_coverage=1 00:06:57.023 --rc genhtml_legend=1 00:06:57.023 --rc geninfo_all_blocks=1 00:06:57.023 --rc geninfo_unexecuted_blocks=1 00:06:57.023 00:06:57.023 ' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:57.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.023 --rc genhtml_branch_coverage=1 00:06:57.023 --rc genhtml_function_coverage=1 00:06:57.023 --rc genhtml_legend=1 00:06:57.023 --rc geninfo_all_blocks=1 00:06:57.023 --rc geninfo_unexecuted_blocks=1 00:06:57.023 00:06:57.023 ' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:57.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.023 --rc genhtml_branch_coverage=1 00:06:57.023 --rc genhtml_function_coverage=1 00:06:57.023 --rc genhtml_legend=1 00:06:57.023 --rc geninfo_all_blocks=1 00:06:57.023 --rc geninfo_unexecuted_blocks=1 00:06:57.023 00:06:57.023 ' 00:06:57.023 18:05:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.023 18:05:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.023 18:05:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.023 18:05:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.023 18:05:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.023 18:05:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.023 18:05:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.023 18:05:15 -- paths/export.sh@5 -- # export PATH 00:06:57.023 18:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.023 18:05:15 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.023 18:05:15 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.023 18:05:15 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.023 18:05:15 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.023 18:05:15 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:06:57.023 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.023 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.023 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 ************************************ 00:06:57.023 START TEST dd_invalid_arguments 00:06:57.023 ************************************ 00:06:57.023 18:05:15 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:06:57.023 18:05:15 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:57.023 18:05:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.023 18:05:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:57.023 18:05:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.023 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.023 18:05:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.023 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.023 18:05:15 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.023 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.023 18:05:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.023 18:05:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.023 18:05:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:57.023 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:57.023 options: 00:06:57.023 -c, --config JSON config file (default none) 00:06:57.023 --json JSON config file (default none) 00:06:57.023 --json-ignore-init-errors 00:06:57.023 don't exit on invalid config entry 00:06:57.023 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:57.023 -g, --single-file-segments 00:06:57.023 force creating just one hugetlbfs file 00:06:57.023 -h, --help show this usage 00:06:57.023 -i, --shm-id shared memory ID (optional) 00:06:57.023 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:57.023 --lcores lcore to CPU mapping list. The list is in the format: 00:06:57.023 [<,lcores[@CPUs]>...] 00:06:57.023 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:57.023 Within the group, '-' is used for range separator, 00:06:57.023 ',' is used for single number separator. 00:06:57.023 '( )' can be omitted for single element group, 00:06:57.023 '@' can be omitted if cpus and lcores have the same value 00:06:57.023 -n, --mem-channels channel number of memory channels used for DPDK 00:06:57.023 -p, --main-core main (primary) core for DPDK 00:06:57.023 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:57.023 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:57.023 --disable-cpumask-locks Disable CPU core lock files. 00:06:57.023 --silence-noticelog disable notice level logging to stderr 00:06:57.023 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:57.023 -u, --no-pci disable PCI access 00:06:57.023 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:57.023 --max-delay maximum reactor delay (in microseconds) 00:06:57.023 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:57.023 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:57.023 -R, --huge-unlink unlink huge files after initialization 00:06:57.023 -v, --version print SPDK version 00:06:57.023 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:57.023 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:57.024 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:57.024 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:57.024 Tracepoints vary in size and can use more than one trace entry. 
00:06:57.024 --rpcs-allowed comma-separated list of permitted RPCS 00:06:57.024 --env-context Opaque context for use of the env implementation 00:06:57.024 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:57.024 --no-huge run without using hugepages 00:06:57.024 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:57.024 -e, --tpoint-group [:] 00:06:57.024 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:06:57.024 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:57.024 [2024-11-18 18:05:15.536841] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:06:57.024 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:57.024 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:57.024 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:57.024 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:57.024 [--------- DD Options ---------] 00:06:57.024 --if Input file. Must specify either --if or --ib. 00:06:57.024 --ib Input bdev. Must specifier either --if or --ib 00:06:57.024 --of Output file. Must specify either --of or --ob. 00:06:57.024 --ob Output bdev. Must specify either --of or --ob. 00:06:57.024 --iflag Input file flags. 00:06:57.024 --oflag Output file flags. 00:06:57.024 --bs I/O unit size (default: 4096) 00:06:57.024 --qd Queue depth (default: 2) 00:06:57.024 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:57.024 --skip Skip this many I/O units at start of input. (default: 0) 00:06:57.024 --seek Skip this many I/O units at start of output. (default: 0) 00:06:57.024 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:06:57.024 --sparse Enable hole skipping in input target 00:06:57.024 Available iflag and oflag values: 00:06:57.024 append - append mode 00:06:57.024 direct - use direct I/O for data 00:06:57.024 directory - fail unless a directory 00:06:57.024 dsync - use synchronized I/O for data 00:06:57.024 noatime - do not update access time 00:06:57.024 noctty - do not assign controlling terminal from file 00:06:57.024 nofollow - do not follow symlinks 00:06:57.024 nonblock - use non-blocking I/O 00:06:57.024 sync - use synchronized I/O for data and metadata 00:06:57.024 18:05:15 -- common/autotest_common.sh@653 -- # es=2 00:06:57.024 18:05:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.024 18:05:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.024 18:05:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.024 00:06:57.024 real 0m0.069s 00:06:57.024 user 0m0.036s 00:06:57.024 sys 0m0.032s 00:06:57.024 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.024 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.024 ************************************ 00:06:57.024 END TEST dd_invalid_arguments 00:06:57.024 ************************************ 00:06:57.024 18:05:15 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:06:57.024 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.024 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.024 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.024 ************************************ 00:06:57.024 START TEST dd_double_input 00:06:57.024 ************************************ 00:06:57.024 18:05:15 -- common/autotest_common.sh@1114 -- # double_input 00:06:57.024 18:05:15 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:57.024 18:05:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.024 18:05:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:57.024 18:05:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.024 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.024 18:05:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.024 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.024 18:05:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.024 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.024 18:05:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.024 18:05:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.024 18:05:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:57.297 [2024-11-18 18:05:15.656019] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
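Every case in this negative suite follows the same shape: spdk_dd is run through the NOT wrapper with a deliberately invalid flag combination, the binary prints its usage text or a specific *ERROR* line (the unrecognized --ii= above, or --if combined with --ib here), and the wrapper captures the exit status in es so the test only passes when the command failed as expected. A rough standalone equivalent of one such check is sketched below; it bypasses the run_test/NOT helpers from autotest_common.sh, and the capture file name dd_neg.log is illustrative.

# Sketch of the expected-failure pattern (not the harness code itself):
# spdk_dd must refuse --if and --ib together and exit non-zero.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= >dd_neg.log 2>&1; then
    echo "FAIL: conflicting --if/--ib was accepted" >&2
    exit 1
fi
grep -q 'either --if or --ib' dd_neg.log && echo "PASS: spdk_dd rejected the conflicting inputs"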
00:06:57.297 18:05:15 -- common/autotest_common.sh@653 -- # es=22 00:06:57.297 18:05:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.297 18:05:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.297 18:05:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.297 00:06:57.297 real 0m0.069s 00:06:57.297 user 0m0.045s 00:06:57.297 sys 0m0.023s 00:06:57.297 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.297 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 END TEST dd_double_input 00:06:57.297 ************************************ 00:06:57.297 18:05:15 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:06:57.297 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.297 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.297 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 START TEST dd_double_output 00:06:57.297 ************************************ 00:06:57.297 18:05:15 -- common/autotest_common.sh@1114 -- # double_output 00:06:57.297 18:05:15 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:57.297 18:05:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.297 18:05:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:57.297 18:05:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.297 18:05:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:57.297 [2024-11-18 18:05:15.775953] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:06:57.297 18:05:15 -- common/autotest_common.sh@653 -- # es=22 00:06:57.297 18:05:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.297 18:05:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.297 18:05:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.297 00:06:57.297 real 0m0.067s 00:06:57.297 user 0m0.047s 00:06:57.297 sys 0m0.019s 00:06:57.297 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.297 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 END TEST dd_double_output 00:06:57.297 ************************************ 00:06:57.297 18:05:15 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:06:57.297 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.297 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.297 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.297 ************************************ 00:06:57.297 START TEST dd_no_input 00:06:57.297 ************************************ 00:06:57.297 18:05:15 -- common/autotest_common.sh@1114 -- # no_input 00:06:57.297 18:05:15 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:57.297 18:05:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.297 18:05:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:57.297 18:05:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.297 18:05:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.297 18:05:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:57.297 [2024-11-18 18:05:15.896620] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:06:57.556 18:05:15 -- common/autotest_common.sh@653 -- # es=22 00:06:57.556 18:05:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.556 18:05:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.556 18:05:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.556 00:06:57.556 real 0m0.069s 00:06:57.556 user 0m0.048s 00:06:57.556 sys 0m0.020s 00:06:57.556 18:05:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.556 ************************************ 00:06:57.556 END TEST dd_no_input 00:06:57.556 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 ************************************ 00:06:57.556 18:05:15 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:06:57.556 18:05:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.556 18:05:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.556 18:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 ************************************ 
00:06:57.556 START TEST dd_no_output 00:06:57.556 ************************************ 00:06:57.556 18:05:15 -- common/autotest_common.sh@1114 -- # no_output 00:06:57.556 18:05:15 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.556 18:05:15 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.556 18:05:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.556 18:05:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.556 18:05:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.556 18:05:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.556 18:05:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.556 18:05:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.556 [2024-11-18 18:05:16.017672] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:06:57.556 18:05:16 -- common/autotest_common.sh@653 -- # es=22 00:06:57.556 18:05:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.556 18:05:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.556 18:05:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.556 00:06:57.556 real 0m0.072s 00:06:57.556 user 0m0.043s 00:06:57.556 sys 0m0.028s 00:06:57.556 18:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.556 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 ************************************ 00:06:57.556 END TEST dd_no_output 00:06:57.556 ************************************ 00:06:57.556 18:05:16 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:57.556 18:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.556 18:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.556 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 ************************************ 00:06:57.556 START TEST dd_wrong_blocksize 00:06:57.556 ************************************ 00:06:57.556 18:05:16 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:06:57.556 18:05:16 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:57.556 18:05:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.556 18:05:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:57.556 18:05:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:16 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:57.556 18:05:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.556 18:05:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.556 18:05:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.556 18:05:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.556 18:05:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:57.556 [2024-11-18 18:05:16.139689] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:06:57.556 18:05:16 -- common/autotest_common.sh@653 -- # es=22 00:06:57.556 18:05:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.556 18:05:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.556 18:05:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.556 00:06:57.556 real 0m0.072s 00:06:57.556 user 0m0.044s 00:06:57.556 sys 0m0.027s 00:06:57.556 18:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.556 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 ************************************ 00:06:57.556 END TEST dd_wrong_blocksize 00:06:57.556 ************************************ 00:06:57.815 18:05:16 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:57.815 18:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.815 18:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.815 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:57.815 ************************************ 00:06:57.815 START TEST dd_smaller_blocksize 00:06:57.815 ************************************ 00:06:57.815 18:05:16 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:06:57.815 18:05:16 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:57.815 18:05:16 -- common/autotest_common.sh@650 -- # local es=0 00:06:57.815 18:05:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:57.815 18:05:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.815 18:05:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.815 18:05:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.815 18:05:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.815 18:05:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.815 18:05:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.815 18:05:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.815 18:05:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:06:57.815 18:05:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:57.815 [2024-11-18 18:05:16.261580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.816 [2024-11-18 18:05:16.261675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59638 ] 00:06:57.816 [2024-11-18 18:05:16.400407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.075 [2024-11-18 18:05:16.467498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.334 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:58.334 [2024-11-18 18:05:16.793159] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:58.334 [2024-11-18 18:05:16.793229] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.334 [2024-11-18 18:05:16.854743] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:58.593 18:05:16 -- common/autotest_common.sh@653 -- # es=244 00:06:58.593 18:05:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.593 18:05:16 -- common/autotest_common.sh@662 -- # es=116 00:06:58.593 18:05:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.593 18:05:16 -- common/autotest_common.sh@670 -- # es=1 00:06:58.593 18:05:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.593 00:06:58.593 real 0m0.742s 00:06:58.593 user 0m0.332s 00:06:58.593 sys 0m0.305s 00:06:58.593 18:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.593 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.593 ************************************ 00:06:58.593 END TEST dd_smaller_blocksize 00:06:58.593 ************************************ 00:06:58.593 18:05:16 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:06:58.593 18:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.593 18:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.593 18:05:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.593 ************************************ 00:06:58.593 START TEST dd_invalid_count 00:06:58.593 ************************************ 00:06:58.593 18:05:16 -- common/autotest_common.sh@1114 -- # invalid_count 00:06:58.593 18:05:17 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:58.593 18:05:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.593 18:05:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:58.593 18:05:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.593 18:05:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:58.593 [2024-11-18 18:05:17.053944] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:06:58.593 18:05:17 -- common/autotest_common.sh@653 -- # es=22 00:06:58.593 18:05:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.593 18:05:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.593 18:05:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.593 00:06:58.593 real 0m0.070s 00:06:58.593 user 0m0.045s 00:06:58.593 sys 0m0.024s 00:06:58.593 18:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.593 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.593 ************************************ 00:06:58.593 END TEST dd_invalid_count 00:06:58.593 ************************************ 00:06:58.593 18:05:17 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:06:58.593 18:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.593 18:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.593 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.593 ************************************ 00:06:58.593 START TEST dd_invalid_oflag 00:06:58.593 ************************************ 00:06:58.593 18:05:17 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:06:58.593 18:05:17 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:58.593 18:05:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.593 18:05:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:58.593 18:05:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.593 18:05:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.593 18:05:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.593 18:05:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:58.593 [2024-11-18 18:05:17.173313] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:06:58.593 18:05:17 -- common/autotest_common.sh@653 -- # es=22 00:06:58.593 18:05:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.593 18:05:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.593 
18:05:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.593 00:06:58.593 real 0m0.070s 00:06:58.593 user 0m0.048s 00:06:58.593 sys 0m0.021s 00:06:58.593 18:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.593 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.593 ************************************ 00:06:58.593 END TEST dd_invalid_oflag 00:06:58.593 ************************************ 00:06:58.852 18:05:17 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:06:58.852 18:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.852 18:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.852 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.852 ************************************ 00:06:58.852 START TEST dd_invalid_iflag 00:06:58.852 ************************************ 00:06:58.852 18:05:17 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:06:58.852 18:05:17 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:58.853 18:05:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.853 18:05:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:58.853 18:05:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.853 18:05:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:58.853 [2024-11-18 18:05:17.292210] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:06:58.853 18:05:17 -- common/autotest_common.sh@653 -- # es=22 00:06:58.853 18:05:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.853 18:05:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.853 18:05:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.853 00:06:58.853 real 0m0.069s 00:06:58.853 user 0m0.041s 00:06:58.853 sys 0m0.027s 00:06:58.853 18:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.853 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.853 ************************************ 00:06:58.853 END TEST dd_invalid_iflag 00:06:58.853 ************************************ 00:06:58.853 18:05:17 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:06:58.853 18:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.853 18:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.853 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.853 ************************************ 00:06:58.853 START TEST dd_unknown_flag 00:06:58.853 ************************************ 00:06:58.853 18:05:17 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:06:58.853 18:05:17 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:58.853 18:05:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:58.853 18:05:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:58.853 18:05:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.853 18:05:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.853 18:05:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:58.853 [2024-11-18 18:05:17.412217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.853 [2024-11-18 18:05:17.412311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:06:59.112 [2024-11-18 18:05:17.550264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.112 [2024-11-18 18:05:17.596436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.112 [2024-11-18 18:05:17.637621] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:06:59.112 [2024-11-18 18:05:17.637687] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:06:59.112 [2024-11-18 18:05:17.637721] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:06:59.112 [2024-11-18 18:05:17.637746] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.112 [2024-11-18 18:05:17.694465] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.371 18:05:17 -- common/autotest_common.sh@653 -- # es=236 00:06:59.371 18:05:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.372 18:05:17 -- common/autotest_common.sh@662 -- # es=108 00:06:59.372 18:05:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.372 18:05:17 -- common/autotest_common.sh@670 -- # es=1 00:06:59.372 18:05:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.372 00:06:59.372 real 0m0.443s 00:06:59.372 user 0m0.255s 00:06:59.372 sys 0m0.084s 00:06:59.372 18:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.372 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 ************************************ 00:06:59.372 END 
TEST dd_unknown_flag 00:06:59.372 ************************************ 00:06:59.372 18:05:17 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:06:59.372 18:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.372 18:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.372 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:06:59.372 ************************************ 00:06:59.372 START TEST dd_invalid_json 00:06:59.372 ************************************ 00:06:59.372 18:05:17 -- common/autotest_common.sh@1114 -- # invalid_json 00:06:59.372 18:05:17 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:59.372 18:05:17 -- common/autotest_common.sh@650 -- # local es=0 00:06:59.372 18:05:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:59.372 18:05:17 -- dd/negative_dd.sh@95 -- # : 00:06:59.372 18:05:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.372 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.372 18:05:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.372 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.372 18:05:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.372 18:05:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.372 18:05:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.372 18:05:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.372 18:05:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:59.372 [2024-11-18 18:05:17.904730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.372 [2024-11-18 18:05:17.904826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59752 ] 00:06:59.631 [2024-11-18 18:05:18.043236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.631 [2024-11-18 18:05:18.099506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.631 [2024-11-18 18:05:18.099668] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:06:59.631 [2024-11-18 18:05:18.099686] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.631 [2024-11-18 18:05:18.099723] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.631 18:05:18 -- common/autotest_common.sh@653 -- # es=234 00:06:59.631 18:05:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.631 18:05:18 -- common/autotest_common.sh@662 -- # es=106 00:06:59.631 18:05:18 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:59.631 18:05:18 -- common/autotest_common.sh@670 -- # es=1 00:06:59.631 18:05:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.631 00:06:59.631 real 0m0.340s 00:06:59.631 user 0m0.182s 00:06:59.631 sys 0m0.056s 00:06:59.631 18:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.631 ************************************ 00:06:59.631 END TEST dd_invalid_json 00:06:59.631 ************************************ 00:06:59.631 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.631 00:06:59.631 real 0m2.941s 00:06:59.631 user 0m1.488s 00:06:59.631 sys 0m1.088s 00:06:59.631 18:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.631 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.631 ************************************ 00:06:59.631 END TEST spdk_dd_negative 00:06:59.631 ************************************ 00:06:59.890 00:06:59.890 real 1m5.348s 00:06:59.890 user 0m40.701s 00:06:59.890 sys 0m15.496s 00:06:59.890 18:05:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.890 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.890 ************************************ 00:06:59.890 END TEST spdk_dd 00:06:59.890 ************************************ 00:06:59.890 18:05:18 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@255 -- # timing_exit lib 00:06:59.890 18:05:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:59.890 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.890 18:05:18 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:06:59.890 18:05:18 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:06:59.890 18:05:18 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.890 18:05:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:59.890 18:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.890 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.890 ************************************ 00:06:59.890 START TEST 
nvmf_tcp 00:06:59.890 ************************************ 00:06:59.890 18:05:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:59.890 * Looking for test storage... 00:06:59.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:59.890 18:05:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:59.890 18:05:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:59.890 18:05:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:00.150 18:05:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:00.150 18:05:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:00.150 18:05:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:00.150 18:05:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:00.150 18:05:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:00.150 18:05:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:00.150 18:05:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.150 18:05:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:00.150 18:05:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:00.150 18:05:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:00.150 18:05:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:00.150 18:05:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:00.150 18:05:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:00.150 18:05:18 -- scripts/common.sh@344 -- # : 1 00:07:00.150 18:05:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:00.150 18:05:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.150 18:05:18 -- scripts/common.sh@364 -- # decimal 1 00:07:00.150 18:05:18 -- scripts/common.sh@352 -- # local d=1 00:07:00.150 18:05:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.150 18:05:18 -- scripts/common.sh@354 -- # echo 1 00:07:00.150 18:05:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:00.150 18:05:18 -- scripts/common.sh@365 -- # decimal 2 00:07:00.150 18:05:18 -- scripts/common.sh@352 -- # local d=2 00:07:00.150 18:05:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.150 18:05:18 -- scripts/common.sh@354 -- # echo 2 00:07:00.150 18:05:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:00.150 18:05:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:00.150 18:05:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:00.150 18:05:18 -- scripts/common.sh@367 -- # return 0 00:07:00.150 18:05:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.150 18:05:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.150 --rc genhtml_branch_coverage=1 00:07:00.150 --rc genhtml_function_coverage=1 00:07:00.150 --rc genhtml_legend=1 00:07:00.150 --rc geninfo_all_blocks=1 00:07:00.150 --rc geninfo_unexecuted_blocks=1 00:07:00.150 00:07:00.150 ' 00:07:00.150 18:05:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.150 --rc genhtml_branch_coverage=1 00:07:00.150 --rc genhtml_function_coverage=1 00:07:00.150 --rc genhtml_legend=1 00:07:00.150 --rc geninfo_all_blocks=1 00:07:00.150 --rc geninfo_unexecuted_blocks=1 00:07:00.150 00:07:00.150 ' 00:07:00.150 18:05:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.150 --rc 
genhtml_branch_coverage=1 00:07:00.150 --rc genhtml_function_coverage=1 00:07:00.150 --rc genhtml_legend=1 00:07:00.150 --rc geninfo_all_blocks=1 00:07:00.150 --rc geninfo_unexecuted_blocks=1 00:07:00.150 00:07:00.150 ' 00:07:00.150 18:05:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:00.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.150 --rc genhtml_branch_coverage=1 00:07:00.150 --rc genhtml_function_coverage=1 00:07:00.150 --rc genhtml_legend=1 00:07:00.150 --rc geninfo_all_blocks=1 00:07:00.150 --rc geninfo_unexecuted_blocks=1 00:07:00.150 00:07:00.150 ' 00:07:00.150 18:05:18 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.151 18:05:18 -- nvmf/common.sh@7 -- # uname -s 00:07:00.151 18:05:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.151 18:05:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.151 18:05:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.151 18:05:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.151 18:05:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.151 18:05:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.151 18:05:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.151 18:05:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.151 18:05:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.151 18:05:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.151 18:05:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:00.151 18:05:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:00.151 18:05:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.151 18:05:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.151 18:05:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.151 18:05:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.151 18:05:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.151 18:05:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.151 18:05:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.151 18:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.151 18:05:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.151 18:05:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.151 18:05:18 -- paths/export.sh@5 -- # export PATH 00:07:00.151 18:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.151 18:05:18 -- nvmf/common.sh@46 -- # : 0 00:07:00.151 18:05:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:00.151 18:05:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:00.151 18:05:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:00.151 18:05:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.151 18:05:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.151 18:05:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:00.151 18:05:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:00.151 18:05:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:00.151 18:05:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.151 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:00.151 18:05:18 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:00.151 18:05:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:00.151 18:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.151 18:05:18 -- common/autotest_common.sh@10 -- # set +x 00:07:00.151 ************************************ 00:07:00.151 START TEST nvmf_host_management 00:07:00.151 ************************************ 00:07:00.151 18:05:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:00.151 * Looking for test storage... 
00:07:00.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.151 18:05:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:00.151 18:05:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:00.151 18:05:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:00.151 18:05:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:00.151 18:05:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:00.151 18:05:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:00.151 18:05:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:00.151 18:05:18 -- scripts/common.sh@335 -- # IFS=.-: 00:07:00.151 18:05:18 -- scripts/common.sh@335 -- # read -ra ver1 00:07:00.151 18:05:18 -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.151 18:05:18 -- scripts/common.sh@336 -- # read -ra ver2 00:07:00.151 18:05:18 -- scripts/common.sh@337 -- # local 'op=<' 00:07:00.151 18:05:18 -- scripts/common.sh@339 -- # ver1_l=2 00:07:00.151 18:05:18 -- scripts/common.sh@340 -- # ver2_l=1 00:07:00.151 18:05:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:00.151 18:05:18 -- scripts/common.sh@343 -- # case "$op" in 00:07:00.151 18:05:18 -- scripts/common.sh@344 -- # : 1 00:07:00.151 18:05:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:00.151 18:05:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.151 18:05:18 -- scripts/common.sh@364 -- # decimal 1 00:07:00.411 18:05:18 -- scripts/common.sh@352 -- # local d=1 00:07:00.411 18:05:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.411 18:05:18 -- scripts/common.sh@354 -- # echo 1 00:07:00.411 18:05:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:00.411 18:05:18 -- scripts/common.sh@365 -- # decimal 2 00:07:00.411 18:05:18 -- scripts/common.sh@352 -- # local d=2 00:07:00.411 18:05:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.411 18:05:18 -- scripts/common.sh@354 -- # echo 2 00:07:00.411 18:05:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:00.411 18:05:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:00.411 18:05:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:00.411 18:05:18 -- scripts/common.sh@367 -- # return 0 00:07:00.411 18:05:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.411 18:05:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:00.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.411 --rc genhtml_branch_coverage=1 00:07:00.411 --rc genhtml_function_coverage=1 00:07:00.411 --rc genhtml_legend=1 00:07:00.411 --rc geninfo_all_blocks=1 00:07:00.411 --rc geninfo_unexecuted_blocks=1 00:07:00.411 00:07:00.411 ' 00:07:00.411 18:05:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:00.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.411 --rc genhtml_branch_coverage=1 00:07:00.411 --rc genhtml_function_coverage=1 00:07:00.411 --rc genhtml_legend=1 00:07:00.411 --rc geninfo_all_blocks=1 00:07:00.411 --rc geninfo_unexecuted_blocks=1 00:07:00.411 00:07:00.411 ' 00:07:00.411 18:05:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:00.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.411 --rc genhtml_branch_coverage=1 00:07:00.411 --rc genhtml_function_coverage=1 00:07:00.411 --rc genhtml_legend=1 00:07:00.411 --rc geninfo_all_blocks=1 00:07:00.411 --rc geninfo_unexecuted_blocks=1 00:07:00.411 00:07:00.411 ' 00:07:00.411 
18:05:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:00.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.411 --rc genhtml_branch_coverage=1 00:07:00.411 --rc genhtml_function_coverage=1 00:07:00.411 --rc genhtml_legend=1 00:07:00.411 --rc geninfo_all_blocks=1 00:07:00.411 --rc geninfo_unexecuted_blocks=1 00:07:00.411 00:07:00.411 ' 00:07:00.411 18:05:18 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.411 18:05:18 -- nvmf/common.sh@7 -- # uname -s 00:07:00.411 18:05:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.411 18:05:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.411 18:05:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.411 18:05:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.411 18:05:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.411 18:05:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.411 18:05:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.411 18:05:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.411 18:05:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.411 18:05:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:00.411 18:05:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:00.411 18:05:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.411 18:05:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.411 18:05:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.411 18:05:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.411 18:05:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.411 18:05:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.411 18:05:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.411 18:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.411 18:05:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.411 18:05:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.411 18:05:18 -- paths/export.sh@5 -- # export PATH 00:07:00.411 18:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.411 18:05:18 -- nvmf/common.sh@46 -- # : 0 00:07:00.411 18:05:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:00.411 18:05:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:00.411 18:05:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:00.411 18:05:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.411 18:05:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.411 18:05:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:00.411 18:05:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:00.411 18:05:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:00.411 18:05:18 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:00.411 18:05:18 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:00.411 18:05:18 -- target/host_management.sh@104 -- # nvmftestinit 00:07:00.411 18:05:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:00.411 18:05:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.411 18:05:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:00.411 18:05:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:00.411 18:05:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:00.411 18:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.411 18:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.411 18:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.411 18:05:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:00.411 18:05:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:00.411 18:05:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.411 18:05:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.411 18:05:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:00.411 18:05:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:00.411 18:05:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.411 18:05:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.411 18:05:18 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.411 18:05:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.411 18:05:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.411 18:05:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:00.411 18:05:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:00.411 18:05:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.411 18:05:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:00.411 Cannot find device "nvmf_init_br" 00:07:00.411 18:05:18 -- nvmf/common.sh@153 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:00.411 Cannot find device "nvmf_tgt_br" 00:07:00.411 18:05:18 -- nvmf/common.sh@154 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.411 Cannot find device "nvmf_tgt_br2" 00:07:00.411 18:05:18 -- nvmf/common.sh@155 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:00.411 Cannot find device "nvmf_init_br" 00:07:00.411 18:05:18 -- nvmf/common.sh@156 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:00.411 Cannot find device "nvmf_tgt_br" 00:07:00.411 18:05:18 -- nvmf/common.sh@157 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:00.411 Cannot find device "nvmf_tgt_br2" 00:07:00.411 18:05:18 -- nvmf/common.sh@158 -- # true 00:07:00.411 18:05:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:00.411 Cannot find device "nvmf_br" 00:07:00.411 18:05:18 -- nvmf/common.sh@159 -- # true 00:07:00.412 18:05:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:00.412 Cannot find device "nvmf_init_if" 00:07:00.412 18:05:18 -- nvmf/common.sh@160 -- # true 00:07:00.412 18:05:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.412 18:05:18 -- nvmf/common.sh@161 -- # true 00:07:00.412 18:05:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.412 18:05:18 -- nvmf/common.sh@162 -- # true 00:07:00.412 18:05:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.412 18:05:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.412 18:05:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.412 18:05:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.412 18:05:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.412 18:05:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.412 18:05:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.412 18:05:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:00.412 18:05:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:00.412 18:05:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:00.412 18:05:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:00.412 18:05:18 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:00.412 18:05:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:00.412 18:05:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.412 18:05:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.671 18:05:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.671 18:05:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:00.671 18:05:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:00.671 18:05:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.671 18:05:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.671 18:05:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:00.671 18:05:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.671 18:05:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.671 18:05:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:00.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:07:00.671 00:07:00.671 --- 10.0.0.2 ping statistics --- 00:07:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.671 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:00.671 18:05:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:00.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:07:00.671 00:07:00.671 --- 10.0.0.3 ping statistics --- 00:07:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.671 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:00.671 18:05:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:00.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:00.671 00:07:00.671 --- 10.0.0.1 ping statistics --- 00:07:00.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.671 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:00.671 18:05:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.671 18:05:19 -- nvmf/common.sh@421 -- # return 0 00:07:00.671 18:05:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:00.671 18:05:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.671 18:05:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:00.672 18:05:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:00.672 18:05:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.672 18:05:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:00.672 18:05:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:00.672 18:05:19 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:00.672 18:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.672 18:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.672 18:05:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.672 ************************************ 00:07:00.672 START TEST nvmf_host_management 00:07:00.672 ************************************ 00:07:00.672 18:05:19 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:07:00.672 18:05:19 -- target/host_management.sh@69 -- # starttarget 00:07:00.672 18:05:19 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:00.672 18:05:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:00.672 18:05:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.672 18:05:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.672 18:05:19 -- nvmf/common.sh@469 -- # nvmfpid=60035 00:07:00.672 18:05:19 -- nvmf/common.sh@470 -- # waitforlisten 60035 00:07:00.672 18:05:19 -- common/autotest_common.sh@829 -- # '[' -z 60035 ']' 00:07:00.672 18:05:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.672 18:05:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:00.672 18:05:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.672 18:05:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.672 18:05:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.672 18:05:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.672 [2024-11-18 18:05:19.248007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.672 [2024-11-18 18:05:19.248093] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.930 [2024-11-18 18:05:19.390645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.930 [2024-11-18 18:05:19.462308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.931 [2024-11-18 18:05:19.462797] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:00.931 [2024-11-18 18:05:19.462824] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.931 [2024-11-18 18:05:19.462836] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:00.931 [2024-11-18 18:05:19.462908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.931 [2024-11-18 18:05:19.462988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.931 [2024-11-18 18:05:19.463170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.931 [2024-11-18 18:05:19.463186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.867 18:05:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.867 18:05:20 -- common/autotest_common.sh@862 -- # return 0 00:07:01.867 18:05:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:01.867 18:05:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 18:05:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.867 18:05:20 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.867 18:05:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 [2024-11-18 18:05:20.332917] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.867 18:05:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.867 18:05:20 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:01.867 18:05:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 18:05:20 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:01.867 18:05:20 -- target/host_management.sh@23 -- # cat 00:07:01.867 18:05:20 -- target/host_management.sh@30 -- # rpc_cmd 00:07:01.867 18:05:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 Malloc0 00:07:01.867 [2024-11-18 18:05:20.405784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.867 18:05:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.867 18:05:20 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:01.867 18:05:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:01.867 18:05:20 -- target/host_management.sh@73 -- # perfpid=60089 00:07:01.867 18:05:20 -- target/host_management.sh@74 -- # waitforlisten 60089 /var/tmp/bdevperf.sock 00:07:01.867 18:05:20 -- common/autotest_common.sh@829 -- # '[' -z 60089 ']' 00:07:01.867 18:05:20 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:01.867 18:05:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.867 18:05:20 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:01.867 18:05:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.867 18:05:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.867 18:05:20 -- nvmf/common.sh@520 -- # config=() 00:07:01.867 18:05:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.867 18:05:20 -- nvmf/common.sh@520 -- # local subsystem config 00:07:01.867 18:05:20 -- common/autotest_common.sh@10 -- # set +x 00:07:01.867 18:05:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:01.867 18:05:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:01.867 { 00:07:01.867 "params": { 00:07:01.867 "name": "Nvme$subsystem", 00:07:01.867 "trtype": "$TEST_TRANSPORT", 00:07:01.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:01.867 "adrfam": "ipv4", 00:07:01.867 "trsvcid": "$NVMF_PORT", 00:07:01.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:01.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:01.867 "hdgst": ${hdgst:-false}, 00:07:01.867 "ddgst": ${ddgst:-false} 00:07:01.867 }, 00:07:01.867 "method": "bdev_nvme_attach_controller" 00:07:01.867 } 00:07:01.867 EOF 00:07:01.867 )") 00:07:01.867 18:05:20 -- nvmf/common.sh@542 -- # cat 00:07:01.867 18:05:20 -- nvmf/common.sh@544 -- # jq . 00:07:01.867 18:05:20 -- nvmf/common.sh@545 -- # IFS=, 00:07:01.867 18:05:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:01.867 "params": { 00:07:01.867 "name": "Nvme0", 00:07:01.867 "trtype": "tcp", 00:07:01.867 "traddr": "10.0.0.2", 00:07:01.867 "adrfam": "ipv4", 00:07:01.867 "trsvcid": "4420", 00:07:01.867 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.867 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:01.867 "hdgst": false, 00:07:01.867 "ddgst": false 00:07:01.867 }, 00:07:01.867 "method": "bdev_nvme_attach_controller" 00:07:01.867 }' 00:07:02.127 [2024-11-18 18:05:20.506627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.127 [2024-11-18 18:05:20.507319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60089 ] 00:07:02.127 [2024-11-18 18:05:20.646850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.127 [2024-11-18 18:05:20.713505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.386 Running I/O for 10 seconds... 
00:07:02.955 18:05:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.955 18:05:21 -- common/autotest_common.sh@862 -- # return 0 00:07:02.955 18:05:21 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:02.955 18:05:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.955 18:05:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.955 18:05:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.955 18:05:21 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:02.955 18:05:21 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:02.955 18:05:21 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:02.955 18:05:21 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:02.955 18:05:21 -- target/host_management.sh@52 -- # local ret=1 00:07:02.955 18:05:21 -- target/host_management.sh@53 -- # local i 00:07:02.955 18:05:21 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:02.955 18:05:21 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:02.955 18:05:21 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:02.955 18:05:21 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:02.955 18:05:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.955 18:05:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.955 18:05:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.217 18:05:21 -- target/host_management.sh@55 -- # read_io_count=1994 00:07:03.217 18:05:21 -- target/host_management.sh@58 -- # '[' 1994 -ge 100 ']' 00:07:03.217 18:05:21 -- target/host_management.sh@59 -- # ret=0 00:07:03.217 18:05:21 -- target/host_management.sh@60 -- # break 00:07:03.217 18:05:21 -- target/host_management.sh@64 -- # return 0 00:07:03.217 18:05:21 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:03.217 18:05:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.217 18:05:21 -- common/autotest_common.sh@10 -- # set +x 00:07:03.217 18:05:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.217 18:05:21 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:03.217 18:05:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.217 18:05:21 -- common/autotest_common.sh@10 -- # set +x 00:07:03.217 [2024-11-18 18:05:21.594241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.594936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.595043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.595128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.595322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.595419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.595478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.595588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.595681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.595880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.596005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.596076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.596132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.596195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.596271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.596460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.596604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.596680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.596752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.596835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.597133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.597272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.597414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.597516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.597855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.597930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.598019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.598113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.598293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.598386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.598455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.598559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.598669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.598859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.598975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.599042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.599116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.599181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 18:05:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.217 [2024-11-18 18:05:21.599389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 18:05:21 -- target/host_management.sh@87 -- # sleep 1 00:07:03.217 [2024-11-18 18:05:21.599471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.599571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.599648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.599723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.599795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.600106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.600263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.600414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.600705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.600880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.600957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.601024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.601121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.601327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.217 [2024-11-18 18:05:21.601420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.217 [2024-11-18 18:05:21.601488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20608 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.601587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.601663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.601886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.601986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.602117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.602188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.602259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.602441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.602591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.602675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.602745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.602830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.603083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.603160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.603214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.603285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.603354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.603421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.603653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.603761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.603833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.603914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.604007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.604087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.604273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.604350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.604416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.604487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.604570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.604769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.604855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.604959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.605024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.605099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.605152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.605317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.605397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.605470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.605579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:03.218 [2024-11-18 18:05:21.605794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.605893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.605967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.606062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.606142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.606322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.606398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.606464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.606559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.606678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.606864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.606978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.607118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.607304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.607396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.607464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.607626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.607721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 
18:05:21.607898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.608008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:03.218 [2024-11-18 18:05:21.608083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.608148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cc400 is same with the state(5) to be set 00:07:03.218 [2024-11-18 18:05:21.608292] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14cc400 was disconnected and freed. reset controller. 00:07:03.218 [2024-11-18 18:05:21.608628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.218 [2024-11-18 18:05:21.608751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.608816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.218 [2024-11-18 18:05:21.609033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.609123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.218 [2024-11-18 18:05:21.609192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.609245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:03.218 [2024-11-18 18:05:21.609315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:03.218 [2024-11-18 18:05:21.609379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f2150 is same with the state(5) to be set 00:07:03.218 [2024-11-18 18:05:21.610524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:03.218 task offset: 15872 on job bdev=Nvme0n1 fails 00:07:03.218 00:07:03.218 Latency(us) 00:07:03.218 [2024-11-18T18:05:21.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.218 [2024-11-18T18:05:21.822Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:03.218 [2024-11-18T18:05:21.822Z] Job: Nvme0n1 ended in about 0.75 seconds with error 00:07:03.218 Verification LBA range: start 0x0 length 0x400 00:07:03.218 Nvme0n1 : 0.75 2888.60 180.54 85.27 0.00 21172.16 8519.68 31933.91 00:07:03.218 [2024-11-18T18:05:21.822Z] =================================================================================================================== 00:07:03.218 [2024-11-18T18:05:21.822Z] Total : 2888.60 180.54 85.27 0.00 21172.16 8519.68 31933.91 00:07:03.218 [2024-11-18 18:05:21.612840] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.218 [2024-11-18 18:05:21.612987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x14f2150 (9): Bad file descriptor 00:07:03.218 [2024-11-18 18:05:21.617910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:04.155 18:05:22 -- target/host_management.sh@91 -- # kill -9 60089 00:07:04.155 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60089) - No such process 00:07:04.155 18:05:22 -- target/host_management.sh@91 -- # true 00:07:04.155 18:05:22 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:04.155 18:05:22 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:04.155 18:05:22 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:04.155 18:05:22 -- nvmf/common.sh@520 -- # config=() 00:07:04.155 18:05:22 -- nvmf/common.sh@520 -- # local subsystem config 00:07:04.155 18:05:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:04.155 18:05:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:04.155 { 00:07:04.155 "params": { 00:07:04.155 "name": "Nvme$subsystem", 00:07:04.155 "trtype": "$TEST_TRANSPORT", 00:07:04.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:04.155 "adrfam": "ipv4", 00:07:04.155 "trsvcid": "$NVMF_PORT", 00:07:04.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:04.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:04.155 "hdgst": ${hdgst:-false}, 00:07:04.155 "ddgst": ${ddgst:-false} 00:07:04.155 }, 00:07:04.155 "method": "bdev_nvme_attach_controller" 00:07:04.155 } 00:07:04.155 EOF 00:07:04.155 )") 00:07:04.155 18:05:22 -- nvmf/common.sh@542 -- # cat 00:07:04.155 18:05:22 -- nvmf/common.sh@544 -- # jq . 00:07:04.155 18:05:22 -- nvmf/common.sh@545 -- # IFS=, 00:07:04.155 18:05:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:04.155 "params": { 00:07:04.155 "name": "Nvme0", 00:07:04.155 "trtype": "tcp", 00:07:04.155 "traddr": "10.0.0.2", 00:07:04.155 "adrfam": "ipv4", 00:07:04.155 "trsvcid": "4420", 00:07:04.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:04.155 "hdgst": false, 00:07:04.155 "ddgst": false 00:07:04.155 }, 00:07:04.155 "method": "bdev_nvme_attach_controller" 00:07:04.155 }' 00:07:04.155 [2024-11-18 18:05:22.653760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.155 [2024-11-18 18:05:22.654514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 00:07:04.414 [2024-11-18 18:05:22.790645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.414 [2024-11-18 18:05:22.843442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.414 Running I/O for 1 seconds... 
00:07:05.793 00:07:05.793 Latency(us) 00:07:05.793 [2024-11-18T18:05:24.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.793 [2024-11-18T18:05:24.397Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:05.793 Verification LBA range: start 0x0 length 0x400 00:07:05.793 Nvme0n1 : 1.01 3051.37 190.71 0.00 0.00 20653.64 1243.69 25499.46 00:07:05.793 [2024-11-18T18:05:24.397Z] =================================================================================================================== 00:07:05.793 [2024-11-18T18:05:24.397Z] Total : 3051.37 190.71 0.00 0.00 20653.64 1243.69 25499.46 00:07:05.793 18:05:24 -- target/host_management.sh@101 -- # stoptarget 00:07:05.793 18:05:24 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:05.793 18:05:24 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:05.793 18:05:24 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:05.793 18:05:24 -- target/host_management.sh@40 -- # nvmftestfini 00:07:05.793 18:05:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:05.793 18:05:24 -- nvmf/common.sh@116 -- # sync 00:07:05.793 18:05:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:05.793 18:05:24 -- nvmf/common.sh@119 -- # set +e 00:07:05.793 18:05:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:05.793 18:05:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:05.793 rmmod nvme_tcp 00:07:05.793 rmmod nvme_fabrics 00:07:05.793 rmmod nvme_keyring 00:07:05.793 18:05:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:05.793 18:05:24 -- nvmf/common.sh@123 -- # set -e 00:07:05.793 18:05:24 -- nvmf/common.sh@124 -- # return 0 00:07:05.793 18:05:24 -- nvmf/common.sh@477 -- # '[' -n 60035 ']' 00:07:05.793 18:05:24 -- nvmf/common.sh@478 -- # killprocess 60035 00:07:05.793 18:05:24 -- common/autotest_common.sh@936 -- # '[' -z 60035 ']' 00:07:05.793 18:05:24 -- common/autotest_common.sh@940 -- # kill -0 60035 00:07:05.793 18:05:24 -- common/autotest_common.sh@941 -- # uname 00:07:05.793 18:05:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.793 18:05:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60035 00:07:05.793 killing process with pid 60035 00:07:05.793 18:05:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:05.793 18:05:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:05.793 18:05:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60035' 00:07:05.793 18:05:24 -- common/autotest_common.sh@955 -- # kill 60035 00:07:05.793 18:05:24 -- common/autotest_common.sh@960 -- # wait 60035 00:07:06.053 [2024-11-18 18:05:24.518705] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:06.053 18:05:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:06.053 18:05:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:06.053 18:05:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:06.053 18:05:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.053 18:05:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:06.053 18:05:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.053 18:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.053 18:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.053 18:05:24 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:06.053 00:07:06.053 real 0m5.395s 00:07:06.053 user 0m22.871s 00:07:06.053 sys 0m1.168s 00:07:06.053 18:05:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.053 ************************************ 00:07:06.053 END TEST nvmf_host_management 00:07:06.053 ************************************ 00:07:06.053 18:05:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.053 18:05:24 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:06.053 00:07:06.053 real 0m6.042s 00:07:06.053 user 0m23.072s 00:07:06.053 sys 0m1.416s 00:07:06.053 18:05:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.053 ************************************ 00:07:06.053 18:05:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.053 END TEST nvmf_host_management 00:07:06.053 ************************************ 00:07:06.313 18:05:24 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.313 18:05:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:06.313 18:05:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.313 18:05:24 -- common/autotest_common.sh@10 -- # set +x 00:07:06.313 ************************************ 00:07:06.313 START TEST nvmf_lvol 00:07:06.313 ************************************ 00:07:06.313 18:05:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.313 * Looking for test storage... 00:07:06.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:06.313 18:05:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:06.313 18:05:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:06.313 18:05:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:06.313 18:05:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:06.313 18:05:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:06.313 18:05:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:06.313 18:05:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:06.313 18:05:24 -- scripts/common.sh@335 -- # IFS=.-: 00:07:06.313 18:05:24 -- scripts/common.sh@335 -- # read -ra ver1 00:07:06.313 18:05:24 -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.313 18:05:24 -- scripts/common.sh@336 -- # read -ra ver2 00:07:06.313 18:05:24 -- scripts/common.sh@337 -- # local 'op=<' 00:07:06.313 18:05:24 -- scripts/common.sh@339 -- # ver1_l=2 00:07:06.313 18:05:24 -- scripts/common.sh@340 -- # ver2_l=1 00:07:06.313 18:05:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:06.313 18:05:24 -- scripts/common.sh@343 -- # case "$op" in 00:07:06.313 18:05:24 -- scripts/common.sh@344 -- # : 1 00:07:06.313 18:05:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:06.313 18:05:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.313 18:05:24 -- scripts/common.sh@364 -- # decimal 1 00:07:06.313 18:05:24 -- scripts/common.sh@352 -- # local d=1 00:07:06.313 18:05:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.313 18:05:24 -- scripts/common.sh@354 -- # echo 1 00:07:06.313 18:05:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:06.313 18:05:24 -- scripts/common.sh@365 -- # decimal 2 00:07:06.313 18:05:24 -- scripts/common.sh@352 -- # local d=2 00:07:06.313 18:05:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.313 18:05:24 -- scripts/common.sh@354 -- # echo 2 00:07:06.313 18:05:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:06.313 18:05:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:06.313 18:05:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:06.313 18:05:24 -- scripts/common.sh@367 -- # return 0 00:07:06.313 18:05:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.313 18:05:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:06.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.313 --rc genhtml_branch_coverage=1 00:07:06.313 --rc genhtml_function_coverage=1 00:07:06.313 --rc genhtml_legend=1 00:07:06.313 --rc geninfo_all_blocks=1 00:07:06.313 --rc geninfo_unexecuted_blocks=1 00:07:06.313 00:07:06.313 ' 00:07:06.313 18:05:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:06.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.314 --rc genhtml_branch_coverage=1 00:07:06.314 --rc genhtml_function_coverage=1 00:07:06.314 --rc genhtml_legend=1 00:07:06.314 --rc geninfo_all_blocks=1 00:07:06.314 --rc geninfo_unexecuted_blocks=1 00:07:06.314 00:07:06.314 ' 00:07:06.314 18:05:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:06.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.314 --rc genhtml_branch_coverage=1 00:07:06.314 --rc genhtml_function_coverage=1 00:07:06.314 --rc genhtml_legend=1 00:07:06.314 --rc geninfo_all_blocks=1 00:07:06.314 --rc geninfo_unexecuted_blocks=1 00:07:06.314 00:07:06.314 ' 00:07:06.314 18:05:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:06.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.314 --rc genhtml_branch_coverage=1 00:07:06.314 --rc genhtml_function_coverage=1 00:07:06.314 --rc genhtml_legend=1 00:07:06.314 --rc geninfo_all_blocks=1 00:07:06.314 --rc geninfo_unexecuted_blocks=1 00:07:06.314 00:07:06.314 ' 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:06.314 18:05:24 -- nvmf/common.sh@7 -- # uname -s 00:07:06.314 18:05:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.314 18:05:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.314 18:05:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.314 18:05:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.314 18:05:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.314 18:05:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.314 18:05:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.314 18:05:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.314 18:05:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.314 18:05:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:06.314 
18:05:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:06.314 18:05:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.314 18:05:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.314 18:05:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:06.314 18:05:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.314 18:05:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.314 18:05:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.314 18:05:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.314 18:05:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.314 18:05:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.314 18:05:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.314 18:05:24 -- paths/export.sh@5 -- # export PATH 00:07:06.314 18:05:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.314 18:05:24 -- nvmf/common.sh@46 -- # : 0 00:07:06.314 18:05:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:06.314 18:05:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:06.314 18:05:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:06.314 18:05:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.314 18:05:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.314 18:05:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:07:06.314 18:05:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:06.314 18:05:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.314 18:05:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:06.314 18:05:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:06.314 18:05:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.314 18:05:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:06.314 18:05:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:06.314 18:05:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:06.314 18:05:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.314 18:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.314 18:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.314 18:05:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:06.314 18:05:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:06.314 18:05:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.314 18:05:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.314 18:05:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:06.314 18:05:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:06.314 18:05:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:06.314 18:05:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:06.314 18:05:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:06.314 18:05:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.314 18:05:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:06.314 18:05:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:06.314 18:05:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:06.314 18:05:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:06.314 18:05:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:06.314 18:05:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:06.314 Cannot find device "nvmf_tgt_br" 00:07:06.314 18:05:24 -- nvmf/common.sh@154 -- # true 00:07:06.314 18:05:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:06.314 Cannot find device "nvmf_tgt_br2" 00:07:06.314 18:05:24 -- nvmf/common.sh@155 -- # true 00:07:06.314 18:05:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:06.574 18:05:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:06.574 Cannot find device "nvmf_tgt_br" 00:07:06.574 18:05:24 -- nvmf/common.sh@157 -- # true 00:07:06.574 18:05:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:06.574 Cannot find device "nvmf_tgt_br2" 00:07:06.574 18:05:24 -- nvmf/common.sh@158 -- # true 00:07:06.574 18:05:24 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:07:06.574 18:05:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:06.574 18:05:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:06.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.574 18:05:25 -- nvmf/common.sh@161 -- # true 00:07:06.574 18:05:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:06.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:06.574 18:05:25 -- nvmf/common.sh@162 -- # true 00:07:06.574 18:05:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:06.574 18:05:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:06.574 18:05:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:06.574 18:05:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:06.574 18:05:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:06.574 18:05:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:06.574 18:05:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:06.574 18:05:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:06.574 18:05:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:06.574 18:05:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:06.574 18:05:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:06.574 18:05:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:06.574 18:05:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:06.574 18:05:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:06.574 18:05:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:06.574 18:05:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:06.574 18:05:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:06.574 18:05:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:06.574 18:05:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:06.574 18:05:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:06.574 18:05:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:06.574 18:05:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:06.574 18:05:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:06.574 18:05:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:06.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:07:06.574 00:07:06.574 --- 10.0.0.2 ping statistics --- 00:07:06.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.574 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:07:06.574 18:05:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:06.574 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:06.574 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:06.574 00:07:06.574 --- 10.0.0.3 ping statistics --- 00:07:06.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.574 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:06.574 18:05:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:06.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:06.574 00:07:06.574 --- 10.0.0.1 ping statistics --- 00:07:06.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.574 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:06.574 18:05:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.574 18:05:25 -- nvmf/common.sh@421 -- # return 0 00:07:06.574 18:05:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:06.574 18:05:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.574 18:05:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:06.574 18:05:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:06.574 18:05:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.574 18:05:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:06.574 18:05:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:06.833 18:05:25 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:06.833 18:05:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:06.833 18:05:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:06.833 18:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:06.833 18:05:25 -- nvmf/common.sh@469 -- # nvmfpid=60363 00:07:06.833 18:05:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:06.833 18:05:25 -- nvmf/common.sh@470 -- # waitforlisten 60363 00:07:06.833 18:05:25 -- common/autotest_common.sh@829 -- # '[' -z 60363 ']' 00:07:06.833 18:05:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.833 18:05:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.833 18:05:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.833 18:05:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.833 18:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:06.833 [2024-11-18 18:05:25.241807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.833 [2024-11-18 18:05:25.242102] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.833 [2024-11-18 18:05:25.385125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.093 [2024-11-18 18:05:25.455781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.093 [2024-11-18 18:05:25.456190] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.093 [2024-11-18 18:05:25.456370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:07.093 [2024-11-18 18:05:25.456649] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.093 [2024-11-18 18:05:25.456962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.093 [2024-11-18 18:05:25.457026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.093 [2024-11-18 18:05:25.457028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.661 18:05:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.661 18:05:26 -- common/autotest_common.sh@862 -- # return 0 00:07:07.661 18:05:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:07.661 18:05:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.661 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.920 18:05:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.920 18:05:26 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:07.920 [2024-11-18 18:05:26.511558] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.179 18:05:26 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:08.438 18:05:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:08.438 18:05:26 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:08.697 18:05:27 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:08.697 18:05:27 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:08.956 18:05:27 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:09.216 18:05:27 -- target/nvmf_lvol.sh@29 -- # lvs=bbc6f317-61ac-4589-94fc-457d210f4468 00:07:09.216 18:05:27 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bbc6f317-61ac-4589-94fc-457d210f4468 lvol 20 00:07:09.475 18:05:27 -- target/nvmf_lvol.sh@32 -- # lvol=93baa631-b390-4dbb-a9cb-edf353631bd4 00:07:09.475 18:05:27 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:09.734 18:05:28 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93baa631-b390-4dbb-a9cb-edf353631bd4 00:07:09.735 18:05:28 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.994 [2024-11-18 18:05:28.527468] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.994 18:05:28 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.254 18:05:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=60439 00:07:10.254 18:05:28 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:10.254 18:05:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:11.629 18:05:29 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 93baa631-b390-4dbb-a9cb-edf353631bd4 MY_SNAPSHOT 
00:07:11.629 18:05:30 -- target/nvmf_lvol.sh@47 -- # snapshot=c9bcdc4f-052d-4569-9690-cdf30540398f 00:07:11.629 18:05:30 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 93baa631-b390-4dbb-a9cb-edf353631bd4 30 00:07:11.888 18:05:30 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone c9bcdc4f-052d-4569-9690-cdf30540398f MY_CLONE 00:07:12.146 18:05:30 -- target/nvmf_lvol.sh@49 -- # clone=8bd15402-8688-480a-8d74-4f6e10736bf2 00:07:12.146 18:05:30 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8bd15402-8688-480a-8d74-4f6e10736bf2 00:07:12.714 18:05:31 -- target/nvmf_lvol.sh@53 -- # wait 60439 00:07:20.834 Initializing NVMe Controllers 00:07:20.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:20.834 Controller IO queue size 128, less than required. 00:07:20.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:20.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:20.834 Initialization complete. Launching workers. 00:07:20.834 ======================================================== 00:07:20.834 Latency(us) 00:07:20.834 Device Information : IOPS MiB/s Average min max 00:07:20.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9844.50 38.46 13015.29 1821.70 52557.40 00:07:20.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9783.60 38.22 13094.47 2801.79 64858.91 00:07:20.834 ======================================================== 00:07:20.834 Total : 19628.09 76.67 13054.76 1821.70 64858.91 00:07:20.834 00:07:20.834 18:05:39 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.834 18:05:39 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 93baa631-b390-4dbb-a9cb-edf353631bd4 00:07:21.093 18:05:39 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bbc6f317-61ac-4589-94fc-457d210f4468 00:07:21.351 18:05:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:21.351 18:05:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:21.351 18:05:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:21.351 18:05:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:21.351 18:05:39 -- nvmf/common.sh@116 -- # sync 00:07:21.611 18:05:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:21.611 18:05:39 -- nvmf/common.sh@119 -- # set +e 00:07:21.611 18:05:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:21.611 18:05:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:21.611 rmmod nvme_tcp 00:07:21.611 rmmod nvme_fabrics 00:07:21.611 rmmod nvme_keyring 00:07:21.611 18:05:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:21.611 18:05:40 -- nvmf/common.sh@123 -- # set -e 00:07:21.611 18:05:40 -- nvmf/common.sh@124 -- # return 0 00:07:21.611 18:05:40 -- nvmf/common.sh@477 -- # '[' -n 60363 ']' 00:07:21.611 18:05:40 -- nvmf/common.sh@478 -- # killprocess 60363 00:07:21.611 18:05:40 -- common/autotest_common.sh@936 -- # '[' -z 60363 ']' 00:07:21.611 18:05:40 -- common/autotest_common.sh@940 -- # kill -0 60363 00:07:21.611 18:05:40 -- common/autotest_common.sh@941 -- # uname 00:07:21.611 
18:05:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.611 18:05:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60363 00:07:21.611 killing process with pid 60363 00:07:21.611 18:05:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:21.611 18:05:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:21.611 18:05:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60363' 00:07:21.611 18:05:40 -- common/autotest_common.sh@955 -- # kill 60363 00:07:21.611 18:05:40 -- common/autotest_common.sh@960 -- # wait 60363 00:07:21.870 18:05:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:21.870 18:05:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:21.870 18:05:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:21.870 18:05:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.870 18:05:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:21.870 18:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.870 18:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.870 18:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.870 18:05:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:21.870 ************************************ 00:07:21.870 END TEST nvmf_lvol 00:07:21.870 ************************************ 00:07:21.870 00:07:21.870 real 0m15.695s 00:07:21.870 user 1m4.836s 00:07:21.870 sys 0m4.661s 00:07:21.870 18:05:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.870 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.870 18:05:40 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:21.870 18:05:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:21.870 18:05:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.870 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.870 ************************************ 00:07:21.870 START TEST nvmf_lvs_grow 00:07:21.870 ************************************ 00:07:21.870 18:05:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:22.130 * Looking for test storage... 
00:07:22.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.130 18:05:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.130 18:05:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.130 18:05:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.130 18:05:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.130 18:05:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.130 18:05:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.130 18:05:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.130 18:05:40 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.130 18:05:40 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.130 18:05:40 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.130 18:05:40 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.130 18:05:40 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.130 18:05:40 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.130 18:05:40 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.130 18:05:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.130 18:05:40 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.130 18:05:40 -- scripts/common.sh@344 -- # : 1 00:07:22.130 18:05:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.130 18:05:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.130 18:05:40 -- scripts/common.sh@364 -- # decimal 1 00:07:22.130 18:05:40 -- scripts/common.sh@352 -- # local d=1 00:07:22.130 18:05:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.130 18:05:40 -- scripts/common.sh@354 -- # echo 1 00:07:22.130 18:05:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.130 18:05:40 -- scripts/common.sh@365 -- # decimal 2 00:07:22.130 18:05:40 -- scripts/common.sh@352 -- # local d=2 00:07:22.130 18:05:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.130 18:05:40 -- scripts/common.sh@354 -- # echo 2 00:07:22.130 18:05:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.130 18:05:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.130 18:05:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.130 18:05:40 -- scripts/common.sh@367 -- # return 0 00:07:22.130 18:05:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.130 18:05:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.130 --rc genhtml_branch_coverage=1 00:07:22.130 --rc genhtml_function_coverage=1 00:07:22.130 --rc genhtml_legend=1 00:07:22.130 --rc geninfo_all_blocks=1 00:07:22.130 --rc geninfo_unexecuted_blocks=1 00:07:22.130 00:07:22.130 ' 00:07:22.130 18:05:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.130 --rc genhtml_branch_coverage=1 00:07:22.130 --rc genhtml_function_coverage=1 00:07:22.130 --rc genhtml_legend=1 00:07:22.130 --rc geninfo_all_blocks=1 00:07:22.130 --rc geninfo_unexecuted_blocks=1 00:07:22.130 00:07:22.130 ' 00:07:22.130 18:05:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.130 --rc genhtml_branch_coverage=1 00:07:22.130 --rc genhtml_function_coverage=1 00:07:22.130 --rc genhtml_legend=1 00:07:22.130 --rc geninfo_all_blocks=1 00:07:22.130 --rc geninfo_unexecuted_blocks=1 00:07:22.130 00:07:22.130 ' 00:07:22.130 
18:05:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.130 --rc genhtml_branch_coverage=1 00:07:22.130 --rc genhtml_function_coverage=1 00:07:22.130 --rc genhtml_legend=1 00:07:22.130 --rc geninfo_all_blocks=1 00:07:22.130 --rc geninfo_unexecuted_blocks=1 00:07:22.130 00:07:22.130 ' 00:07:22.130 18:05:40 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.130 18:05:40 -- nvmf/common.sh@7 -- # uname -s 00:07:22.130 18:05:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.130 18:05:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.130 18:05:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.130 18:05:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.130 18:05:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.130 18:05:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.130 18:05:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.130 18:05:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.130 18:05:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.130 18:05:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.130 18:05:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:22.130 18:05:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:07:22.130 18:05:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.130 18:05:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.130 18:05:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.130 18:05:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.130 18:05:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.130 18:05:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.130 18:05:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.130 18:05:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.130 18:05:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.130 18:05:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.130 18:05:40 -- paths/export.sh@5 -- # export PATH 00:07:22.130 18:05:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.130 18:05:40 -- nvmf/common.sh@46 -- # : 0 00:07:22.130 18:05:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:22.130 18:05:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:22.130 18:05:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:22.130 18:05:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.130 18:05:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.130 18:05:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:22.130 18:05:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:22.130 18:05:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:22.130 18:05:40 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.130 18:05:40 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:22.130 18:05:40 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:07:22.130 18:05:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:22.130 18:05:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.130 18:05:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:22.130 18:05:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:22.130 18:05:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:22.130 18:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.130 18:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.130 18:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.130 18:05:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:22.130 18:05:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:22.131 18:05:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:22.131 18:05:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:22.131 18:05:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:22.131 18:05:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:22.131 18:05:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.131 18:05:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.131 18:05:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:22.131 18:05:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:22.131 18:05:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.131 18:05:40 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.131 18:05:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.131 18:05:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.131 18:05:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.131 18:05:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.131 18:05:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.131 18:05:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.131 18:05:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:22.131 18:05:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:22.131 Cannot find device "nvmf_tgt_br" 00:07:22.131 18:05:40 -- nvmf/common.sh@154 -- # true 00:07:22.131 18:05:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.131 Cannot find device "nvmf_tgt_br2" 00:07:22.131 18:05:40 -- nvmf/common.sh@155 -- # true 00:07:22.131 18:05:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:22.131 18:05:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:22.131 Cannot find device "nvmf_tgt_br" 00:07:22.131 18:05:40 -- nvmf/common.sh@157 -- # true 00:07:22.131 18:05:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:22.131 Cannot find device "nvmf_tgt_br2" 00:07:22.131 18:05:40 -- nvmf/common.sh@158 -- # true 00:07:22.131 18:05:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:22.131 18:05:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:22.390 18:05:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:22.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.390 18:05:40 -- nvmf/common.sh@161 -- # true 00:07:22.390 18:05:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.391 18:05:40 -- nvmf/common.sh@162 -- # true 00:07:22.391 18:05:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.391 18:05:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.391 18:05:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.391 18:05:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.391 18:05:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.391 18:05:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.391 18:05:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.391 18:05:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:22.391 18:05:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:22.391 18:05:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:22.391 18:05:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:22.391 18:05:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:22.391 18:05:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:22.391 18:05:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.391 18:05:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:07:22.391 18:05:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.391 18:05:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:22.391 18:05:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:22.391 18:05:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.391 18:05:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.391 18:05:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.391 18:05:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.391 18:05:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.391 18:05:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:22.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:07:22.391 00:07:22.391 --- 10.0.0.2 ping statistics --- 00:07:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.391 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:22.391 18:05:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:22.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:22.391 00:07:22.391 --- 10.0.0.3 ping statistics --- 00:07:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.391 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:22.391 18:05:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:07:22.391 00:07:22.391 --- 10.0.0.1 ping statistics --- 00:07:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.391 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:22.391 18:05:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.391 18:05:40 -- nvmf/common.sh@421 -- # return 0 00:07:22.391 18:05:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:22.391 18:05:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.391 18:05:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:22.391 18:05:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:22.391 18:05:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.391 18:05:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:22.391 18:05:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:22.391 18:05:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:07:22.391 18:05:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:22.391 18:05:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.391 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:22.391 18:05:40 -- nvmf/common.sh@469 -- # nvmfpid=60770 00:07:22.391 18:05:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:22.391 18:05:40 -- nvmf/common.sh@470 -- # waitforlisten 60770 00:07:22.391 18:05:40 -- common/autotest_common.sh@829 -- # '[' -z 60770 ']' 00:07:22.391 18:05:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.391 18:05:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.391 18:05:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.391 18:05:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.391 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:07:22.650 [2024-11-18 18:05:41.044910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.650 [2024-11-18 18:05:41.045003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.650 [2024-11-18 18:05:41.187947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.650 [2024-11-18 18:05:41.250305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.650 [2024-11-18 18:05:41.250487] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.650 [2024-11-18 18:05:41.250500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.650 [2024-11-18 18:05:41.250509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.650 [2024-11-18 18:05:41.250533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.587 18:05:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.588 18:05:42 -- common/autotest_common.sh@862 -- # return 0 00:07:23.588 18:05:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:23.588 18:05:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.588 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.588 18:05:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.588 18:05:42 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:23.847 [2024-11-18 18:05:42.332373] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:07:23.847 18:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.847 18:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.847 18:05:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.847 ************************************ 00:07:23.847 START TEST lvs_grow_clean 00:07:23.847 ************************************ 00:07:23.847 18:05:42 -- common/autotest_common.sh@1114 -- # lvs_grow 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
00:07:23.847 18:05:42 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.106 18:05:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:24.106 18:05:42 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:24.676 18:05:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:24.676 18:05:42 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:24.676 18:05:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:24.676 18:05:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:24.676 18:05:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:24.676 18:05:43 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae lvol 150 00:07:24.936 18:05:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebeb6442-5b11-41ac-862a-d71766be96bc 00:07:24.936 18:05:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:24.936 18:05:43 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:25.455 [2024-11-18 18:05:43.796659] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:25.455 [2024-11-18 18:05:43.797024] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:25.455 true 00:07:25.455 18:05:43 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:25.455 18:05:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:25.715 18:05:44 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:25.715 18:05:44 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.975 18:05:44 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebeb6442-5b11-41ac-862a-d71766be96bc 00:07:26.234 18:05:44 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:26.493 [2024-11-18 18:05:44.886078] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.493 18:05:44 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.753 18:05:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60858 00:07:26.753 18:05:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.753 18:05:45 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:26.753 18:05:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60858 /var/tmp/bdevperf.sock 00:07:26.753 18:05:45 -- 
common/autotest_common.sh@829 -- # '[' -z 60858 ']' 00:07:26.753 18:05:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.753 18:05:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.753 18:05:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.753 18:05:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.753 18:05:45 -- common/autotest_common.sh@10 -- # set +x 00:07:26.753 [2024-11-18 18:05:45.215481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.753 [2024-11-18 18:05:45.215913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:07:27.011 [2024-11-18 18:05:45.356954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.011 [2024-11-18 18:05:45.429009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.579 18:05:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.579 18:05:46 -- common/autotest_common.sh@862 -- # return 0 00:07:27.579 18:05:46 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:28.148 Nvme0n1 00:07:28.148 18:05:46 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:28.148 [ 00:07:28.148 { 00:07:28.148 "name": "Nvme0n1", 00:07:28.148 "aliases": [ 00:07:28.148 "ebeb6442-5b11-41ac-862a-d71766be96bc" 00:07:28.148 ], 00:07:28.148 "product_name": "NVMe disk", 00:07:28.148 "block_size": 4096, 00:07:28.148 "num_blocks": 38912, 00:07:28.148 "uuid": "ebeb6442-5b11-41ac-862a-d71766be96bc", 00:07:28.148 "assigned_rate_limits": { 00:07:28.148 "rw_ios_per_sec": 0, 00:07:28.148 "rw_mbytes_per_sec": 0, 00:07:28.148 "r_mbytes_per_sec": 0, 00:07:28.148 "w_mbytes_per_sec": 0 00:07:28.148 }, 00:07:28.148 "claimed": false, 00:07:28.148 "zoned": false, 00:07:28.148 "supported_io_types": { 00:07:28.148 "read": true, 00:07:28.148 "write": true, 00:07:28.148 "unmap": true, 00:07:28.148 "write_zeroes": true, 00:07:28.148 "flush": true, 00:07:28.148 "reset": true, 00:07:28.148 "compare": true, 00:07:28.148 "compare_and_write": true, 00:07:28.148 "abort": true, 00:07:28.148 "nvme_admin": true, 00:07:28.148 "nvme_io": true 00:07:28.148 }, 00:07:28.148 "driver_specific": { 00:07:28.148 "nvme": [ 00:07:28.148 { 00:07:28.148 "trid": { 00:07:28.148 "trtype": "TCP", 00:07:28.148 "adrfam": "IPv4", 00:07:28.148 "traddr": "10.0.0.2", 00:07:28.148 "trsvcid": "4420", 00:07:28.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:28.148 }, 00:07:28.148 "ctrlr_data": { 00:07:28.148 "cntlid": 1, 00:07:28.148 "vendor_id": "0x8086", 00:07:28.148 "model_number": "SPDK bdev Controller", 00:07:28.148 "serial_number": "SPDK0", 00:07:28.148 "firmware_revision": "24.01.1", 00:07:28.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:28.148 "oacs": { 00:07:28.148 "security": 0, 00:07:28.148 "format": 0, 00:07:28.149 "firmware": 0, 00:07:28.149 "ns_manage": 0 00:07:28.149 }, 00:07:28.149 "multi_ctrlr": true, 00:07:28.149 
"ana_reporting": false 00:07:28.149 }, 00:07:28.149 "vs": { 00:07:28.149 "nvme_version": "1.3" 00:07:28.149 }, 00:07:28.149 "ns_data": { 00:07:28.149 "id": 1, 00:07:28.149 "can_share": true 00:07:28.149 } 00:07:28.149 } 00:07:28.149 ], 00:07:28.149 "mp_policy": "active_passive" 00:07:28.149 } 00:07:28.149 } 00:07:28.149 ] 00:07:28.149 18:05:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60882 00:07:28.149 18:05:46 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:28.149 18:05:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:28.408 Running I/O for 10 seconds... 00:07:29.350 Latency(us) 00:07:29.350 [2024-11-18T18:05:47.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.350 [2024-11-18T18:05:47.955Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.351 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:29.351 [2024-11-18T18:05:47.955Z] =================================================================================================================== 00:07:29.351 [2024-11-18T18:05:47.955Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:29.351 00:07:30.288 18:05:48 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:30.288 [2024-11-18T18:05:48.892Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.288 Nvme0n1 : 2.00 6389.00 24.96 0.00 0.00 0.00 0.00 0.00 00:07:30.288 [2024-11-18T18:05:48.892Z] =================================================================================================================== 00:07:30.288 [2024-11-18T18:05:48.892Z] Total : 6389.00 24.96 0.00 0.00 0.00 0.00 0.00 00:07:30.288 00:07:30.547 true 00:07:30.547 18:05:49 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:30.547 18:05:49 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:30.808 18:05:49 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:30.808 18:05:49 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:30.808 18:05:49 -- target/nvmf_lvs_grow.sh@65 -- # wait 60882 00:07:31.375 [2024-11-18T18:05:49.979Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.375 Nvme0n1 : 3.00 6460.67 25.24 0.00 0.00 0.00 0.00 0.00 00:07:31.375 [2024-11-18T18:05:49.979Z] =================================================================================================================== 00:07:31.375 [2024-11-18T18:05:49.979Z] Total : 6460.67 25.24 0.00 0.00 0.00 0.00 0.00 00:07:31.375 00:07:32.312 [2024-11-18T18:05:50.916Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.312 Nvme0n1 : 4.00 6409.75 25.04 0.00 0.00 0.00 0.00 0.00 00:07:32.312 [2024-11-18T18:05:50.916Z] =================================================================================================================== 00:07:32.312 [2024-11-18T18:05:50.916Z] Total : 6409.75 25.04 0.00 0.00 0.00 0.00 0.00 00:07:32.312 00:07:33.251 [2024-11-18T18:05:51.855Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.251 Nvme0n1 : 5.00 6397.80 24.99 0.00 0.00 0.00 0.00 0.00 00:07:33.251 [2024-11-18T18:05:51.855Z] 
=================================================================================================================== 00:07:33.251 [2024-11-18T18:05:51.855Z] Total : 6397.80 24.99 0.00 0.00 0.00 0.00 0.00 00:07:33.251 00:07:34.628 [2024-11-18T18:05:53.232Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.628 Nvme0n1 : 6.00 6389.83 24.96 0.00 0.00 0.00 0.00 0.00 00:07:34.628 [2024-11-18T18:05:53.232Z] =================================================================================================================== 00:07:34.628 [2024-11-18T18:05:53.232Z] Total : 6389.83 24.96 0.00 0.00 0.00 0.00 0.00 00:07:34.628 00:07:35.566 [2024-11-18T18:05:54.170Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.566 Nvme0n1 : 7.00 6384.14 24.94 0.00 0.00 0.00 0.00 0.00 00:07:35.566 [2024-11-18T18:05:54.170Z] =================================================================================================================== 00:07:35.566 [2024-11-18T18:05:54.170Z] Total : 6384.14 24.94 0.00 0.00 0.00 0.00 0.00 00:07:35.566 00:07:36.269 [2024-11-18T18:05:54.873Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.269 Nvme0n1 : 8.00 6364.00 24.86 0.00 0.00 0.00 0.00 0.00 00:07:36.269 [2024-11-18T18:05:54.873Z] =================================================================================================================== 00:07:36.269 [2024-11-18T18:05:54.873Z] Total : 6364.00 24.86 0.00 0.00 0.00 0.00 0.00 00:07:36.269 00:07:37.647 [2024-11-18T18:05:56.251Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.647 Nvme0n1 : 9.00 6362.44 24.85 0.00 0.00 0.00 0.00 0.00 00:07:37.647 [2024-11-18T18:05:56.251Z] =================================================================================================================== 00:07:37.647 [2024-11-18T18:05:56.251Z] Total : 6362.44 24.85 0.00 0.00 0.00 0.00 0.00 00:07:37.647 00:07:38.584 [2024-11-18T18:05:57.189Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.585 Nvme0n1 : 10.00 6348.50 24.80 0.00 0.00 0.00 0.00 0.00 00:07:38.585 [2024-11-18T18:05:57.189Z] =================================================================================================================== 00:07:38.585 [2024-11-18T18:05:57.189Z] Total : 6348.50 24.80 0.00 0.00 0.00 0.00 0.00 00:07:38.585 00:07:38.585 00:07:38.585 Latency(us) 00:07:38.585 [2024-11-18T18:05:57.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.585 [2024-11-18T18:05:57.189Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.585 Nvme0n1 : 10.00 6359.07 24.84 0.00 0.00 20123.92 8102.63 76260.07 00:07:38.585 [2024-11-18T18:05:57.189Z] =================================================================================================================== 00:07:38.585 [2024-11-18T18:05:57.189Z] Total : 6359.07 24.84 0.00 0.00 20123.92 8102.63 76260.07 00:07:38.585 0 00:07:38.585 18:05:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60858 00:07:38.585 18:05:56 -- common/autotest_common.sh@936 -- # '[' -z 60858 ']' 00:07:38.585 18:05:56 -- common/autotest_common.sh@940 -- # kill -0 60858 00:07:38.585 18:05:56 -- common/autotest_common.sh@941 -- # uname 00:07:38.585 18:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.585 18:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60858 00:07:38.585 killing process with pid 60858 00:07:38.585 
Received shutdown signal, test time was about 10.000000 seconds 00:07:38.585 00:07:38.585 Latency(us) 00:07:38.585 [2024-11-18T18:05:57.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.585 [2024-11-18T18:05:57.189Z] =================================================================================================================== 00:07:38.585 [2024-11-18T18:05:57.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:38.585 18:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:38.585 18:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:38.585 18:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60858' 00:07:38.585 18:05:56 -- common/autotest_common.sh@955 -- # kill 60858 00:07:38.585 18:05:56 -- common/autotest_common.sh@960 -- # wait 60858 00:07:38.585 18:05:57 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:38.844 18:05:57 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:38.844 18:05:57 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:07:39.103 18:05:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:07:39.103 18:05:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:07:39.103 18:05:57 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:39.671 [2024-11-18 18:05:57.963201] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:39.671 18:05:57 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:39.671 18:05:57 -- common/autotest_common.sh@650 -- # local es=0 00:07:39.671 18:05:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:39.671 18:05:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:39.671 18:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.671 18:05:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:39.671 18:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.671 18:05:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:39.671 18:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.671 18:05:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:39.671 18:05:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:39.671 18:05:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:39.671 request: 00:07:39.671 { 00:07:39.671 "uuid": "3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae", 00:07:39.671 "method": "bdev_lvol_get_lvstores", 00:07:39.671 "req_id": 1 00:07:39.671 } 00:07:39.671 Got JSON-RPC error response 00:07:39.671 response: 00:07:39.671 { 00:07:39.671 "code": -19, 00:07:39.671 "message": "No such device" 00:07:39.671 } 00:07:39.930 18:05:58 -- common/autotest_common.sh@653 -- # es=1 00:07:39.930 18:05:58 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.930 18:05:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:39.930 18:05:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.930 18:05:58 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.190 aio_bdev 00:07:40.190 18:05:58 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ebeb6442-5b11-41ac-862a-d71766be96bc 00:07:40.190 18:05:58 -- common/autotest_common.sh@897 -- # local bdev_name=ebeb6442-5b11-41ac-862a-d71766be96bc 00:07:40.190 18:05:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:40.190 18:05:58 -- common/autotest_common.sh@899 -- # local i 00:07:40.190 18:05:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:40.190 18:05:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:40.190 18:05:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.449 18:05:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebeb6442-5b11-41ac-862a-d71766be96bc -t 2000 00:07:40.449 [ 00:07:40.449 { 00:07:40.449 "name": "ebeb6442-5b11-41ac-862a-d71766be96bc", 00:07:40.449 "aliases": [ 00:07:40.449 "lvs/lvol" 00:07:40.449 ], 00:07:40.449 "product_name": "Logical Volume", 00:07:40.449 "block_size": 4096, 00:07:40.449 "num_blocks": 38912, 00:07:40.449 "uuid": "ebeb6442-5b11-41ac-862a-d71766be96bc", 00:07:40.449 "assigned_rate_limits": { 00:07:40.449 "rw_ios_per_sec": 0, 00:07:40.449 "rw_mbytes_per_sec": 0, 00:07:40.449 "r_mbytes_per_sec": 0, 00:07:40.449 "w_mbytes_per_sec": 0 00:07:40.449 }, 00:07:40.449 "claimed": false, 00:07:40.449 "zoned": false, 00:07:40.449 "supported_io_types": { 00:07:40.449 "read": true, 00:07:40.449 "write": true, 00:07:40.449 "unmap": true, 00:07:40.449 "write_zeroes": true, 00:07:40.449 "flush": false, 00:07:40.449 "reset": true, 00:07:40.449 "compare": false, 00:07:40.449 "compare_and_write": false, 00:07:40.449 "abort": false, 00:07:40.449 "nvme_admin": false, 00:07:40.449 "nvme_io": false 00:07:40.449 }, 00:07:40.449 "driver_specific": { 00:07:40.449 "lvol": { 00:07:40.449 "lvol_store_uuid": "3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae", 00:07:40.449 "base_bdev": "aio_bdev", 00:07:40.449 "thin_provision": false, 00:07:40.449 "snapshot": false, 00:07:40.449 "clone": false, 00:07:40.449 "esnap_clone": false 00:07:40.449 } 00:07:40.449 } 00:07:40.449 } 00:07:40.449 ] 00:07:40.709 18:05:59 -- common/autotest_common.sh@905 -- # return 0 00:07:40.709 18:05:59 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:40.709 18:05:59 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:07:40.968 18:05:59 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:07:40.968 18:05:59 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:07:40.968 18:05:59 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:41.226 18:05:59 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:07:41.227 18:05:59 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ebeb6442-5b11-41ac-862a-d71766be96bc 00:07:41.486 18:05:59 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3b2ab4c0-5d57-4dc4-86aa-0fcae35f10ae 00:07:41.745 18:06:00 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.004 18:06:00 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.263 ************************************ 00:07:42.263 END TEST lvs_grow_clean 00:07:42.263 ************************************ 00:07:42.263 00:07:42.263 real 0m18.353s 00:07:42.263 user 0m17.472s 00:07:42.263 sys 0m2.352s 00:07:42.263 18:06:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.263 18:06:00 -- common/autotest_common.sh@10 -- # set +x 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:42.263 18:06:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.263 18:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.263 18:06:00 -- common/autotest_common.sh@10 -- # set +x 00:07:42.263 ************************************ 00:07:42.263 START TEST lvs_grow_dirty 00:07:42.263 ************************************ 00:07:42.263 18:06:00 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:42.263 18:06:00 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.523 18:06:01 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.523 18:06:01 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.782 18:06:01 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0961c87e-7227-466c-ab24-049823e78f85 00:07:42.782 18:06:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.782 18:06:01 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:43.041 18:06:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:43.041 18:06:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:43.041 18:06:01 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0961c87e-7227-466c-ab24-049823e78f85 lvol 150 00:07:43.300 18:06:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9440f20c-98d4-414f-8a3d-b959afdf68a8 00:07:43.300 18:06:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:43.300 18:06:01 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.559 [2024-11-18 18:06:02.008321] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:43.559 [2024-11-18 18:06:02.008411] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.559 true 00:07:43.559 18:06:02 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:43.559 18:06:02 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.819 18:06:02 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.819 18:06:02 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.078 18:06:02 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9440f20c-98d4-414f-8a3d-b959afdf68a8 00:07:44.337 18:06:02 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:44.596 18:06:02 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.856 18:06:03 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.856 18:06:03 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61127 00:07:44.856 18:06:03 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.856 18:06:03 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61127 /var/tmp/bdevperf.sock 00:07:44.856 18:06:03 -- common/autotest_common.sh@829 -- # '[' -z 61127 ']' 00:07:44.856 18:06:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.856 18:06:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.856 18:06:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.856 18:06:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.856 18:06:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.856 [2024-11-18 18:06:03.244840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
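The lvol is then exported over NVMe/TCP and consumed by bdevperf acting as the initiator. A condensed sketch of that wiring, using the same rpc.py calls recorded in this run (address, port and NQN as logged; <lvol-uuid> is the lvol UUID assigned above):

  # target side: expose the lvol bdev through an NVMe-oF TCP subsystem
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf runs with its own RPC socket and gets a controller attached to it
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0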
00:07:44.856 [2024-11-18 18:06:03.245104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61127 ] 00:07:44.856 [2024-11-18 18:06:03.381607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.856 [2024-11-18 18:06:03.451904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.791 18:06:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.792 18:06:04 -- common/autotest_common.sh@862 -- # return 0 00:07:45.792 18:06:04 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:46.050 Nvme0n1 00:07:46.050 18:06:04 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:46.310 [ 00:07:46.310 { 00:07:46.310 "name": "Nvme0n1", 00:07:46.310 "aliases": [ 00:07:46.310 "9440f20c-98d4-414f-8a3d-b959afdf68a8" 00:07:46.310 ], 00:07:46.310 "product_name": "NVMe disk", 00:07:46.310 "block_size": 4096, 00:07:46.310 "num_blocks": 38912, 00:07:46.310 "uuid": "9440f20c-98d4-414f-8a3d-b959afdf68a8", 00:07:46.310 "assigned_rate_limits": { 00:07:46.310 "rw_ios_per_sec": 0, 00:07:46.310 "rw_mbytes_per_sec": 0, 00:07:46.310 "r_mbytes_per_sec": 0, 00:07:46.310 "w_mbytes_per_sec": 0 00:07:46.310 }, 00:07:46.310 "claimed": false, 00:07:46.310 "zoned": false, 00:07:46.310 "supported_io_types": { 00:07:46.310 "read": true, 00:07:46.310 "write": true, 00:07:46.310 "unmap": true, 00:07:46.310 "write_zeroes": true, 00:07:46.310 "flush": true, 00:07:46.310 "reset": true, 00:07:46.310 "compare": true, 00:07:46.310 "compare_and_write": true, 00:07:46.310 "abort": true, 00:07:46.310 "nvme_admin": true, 00:07:46.310 "nvme_io": true 00:07:46.310 }, 00:07:46.310 "driver_specific": { 00:07:46.310 "nvme": [ 00:07:46.310 { 00:07:46.310 "trid": { 00:07:46.310 "trtype": "TCP", 00:07:46.310 "adrfam": "IPv4", 00:07:46.310 "traddr": "10.0.0.2", 00:07:46.310 "trsvcid": "4420", 00:07:46.310 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:46.310 }, 00:07:46.310 "ctrlr_data": { 00:07:46.310 "cntlid": 1, 00:07:46.310 "vendor_id": "0x8086", 00:07:46.310 "model_number": "SPDK bdev Controller", 00:07:46.310 "serial_number": "SPDK0", 00:07:46.310 "firmware_revision": "24.01.1", 00:07:46.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.310 "oacs": { 00:07:46.310 "security": 0, 00:07:46.310 "format": 0, 00:07:46.310 "firmware": 0, 00:07:46.310 "ns_manage": 0 00:07:46.310 }, 00:07:46.310 "multi_ctrlr": true, 00:07:46.310 "ana_reporting": false 00:07:46.310 }, 00:07:46.310 "vs": { 00:07:46.310 "nvme_version": "1.3" 00:07:46.310 }, 00:07:46.310 "ns_data": { 00:07:46.310 "id": 1, 00:07:46.310 "can_share": true 00:07:46.310 } 00:07:46.310 } 00:07:46.310 ], 00:07:46.310 "mp_policy": "active_passive" 00:07:46.310 } 00:07:46.310 } 00:07:46.310 ] 00:07:46.310 18:06:04 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61145 00:07:46.310 18:06:04 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:46.310 18:06:04 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:46.310 Running I/O for 10 seconds... 
00:07:47.687 Latency(us) 00:07:47.687 [2024-11-18T18:06:06.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.687 [2024-11-18T18:06:06.291Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.687 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:47.687 [2024-11-18T18:06:06.291Z] =================================================================================================================== 00:07:47.687 [2024-11-18T18:06:06.291Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:47.687 00:07:48.254 18:06:06 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:48.513 [2024-11-18T18:06:07.117Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.513 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:48.513 [2024-11-18T18:06:07.117Z] =================================================================================================================== 00:07:48.513 [2024-11-18T18:06:07.117Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:48.513 00:07:48.513 true 00:07:48.771 18:06:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:48.771 18:06:07 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:49.030 18:06:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:49.030 18:06:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:49.030 18:06:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 61145 00:07:49.598 [2024-11-18T18:06:08.202Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.598 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:49.598 [2024-11-18T18:06:08.202Z] =================================================================================================================== 00:07:49.598 [2024-11-18T18:06:08.202Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:49.598 00:07:50.535 [2024-11-18T18:06:09.139Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.535 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:50.535 [2024-11-18T18:06:09.139Z] =================================================================================================================== 00:07:50.535 [2024-11-18T18:06:09.139Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:50.535 00:07:51.472 [2024-11-18T18:06:10.076Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.472 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:07:51.472 [2024-11-18T18:06:10.076Z] =================================================================================================================== 00:07:51.472 [2024-11-18T18:06:10.076Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:07:51.472 00:07:52.409 [2024-11-18T18:06:11.013Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.409 Nvme0n1 : 6.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:52.409 [2024-11-18T18:06:11.013Z] =================================================================================================================== 00:07:52.409 [2024-11-18T18:06:11.013Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:07:52.409 00:07:53.344 [2024-11-18T18:06:11.948Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:53.344 Nvme0n1 : 7.00 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:07:53.344 [2024-11-18T18:06:11.948Z] =================================================================================================================== 00:07:53.344 [2024-11-18T18:06:11.948Z] Total : 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:07:53.344 00:07:54.733 [2024-11-18T18:06:13.337Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.733 Nvme0n1 : 8.00 6333.38 24.74 0.00 0.00 0.00 0.00 0.00 00:07:54.733 [2024-11-18T18:06:13.337Z] =================================================================================================================== 00:07:54.733 [2024-11-18T18:06:13.337Z] Total : 6333.38 24.74 0.00 0.00 0.00 0.00 0.00 00:07:54.733 00:07:55.342 [2024-11-18T18:06:13.946Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.342 Nvme0n1 : 9.00 6321.11 24.69 0.00 0.00 0.00 0.00 0.00 00:07:55.342 [2024-11-18T18:06:13.946Z] =================================================================================================================== 00:07:55.342 [2024-11-18T18:06:13.946Z] Total : 6321.11 24.69 0.00 0.00 0.00 0.00 0.00 00:07:55.342 00:07:56.725 [2024-11-18T18:06:15.329Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.725 Nvme0n1 : 10.00 6285.90 24.55 0.00 0.00 0.00 0.00 0.00 00:07:56.725 [2024-11-18T18:06:15.329Z] =================================================================================================================== 00:07:56.725 [2024-11-18T18:06:15.329Z] Total : 6285.90 24.55 0.00 0.00 0.00 0.00 0.00 00:07:56.725 00:07:56.725 00:07:56.725 Latency(us) 00:07:56.725 [2024-11-18T18:06:15.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.725 [2024-11-18T18:06:15.329Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.725 Nvme0n1 : 10.01 6292.49 24.58 0.00 0.00 20336.76 15847.80 154426.65 00:07:56.725 [2024-11-18T18:06:15.329Z] =================================================================================================================== 00:07:56.725 [2024-11-18T18:06:15.329Z] Total : 6292.49 24.58 0.00 0.00 20336.76 15847.80 154426.65 00:07:56.725 0 00:07:56.725 18:06:14 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61127 00:07:56.725 18:06:14 -- common/autotest_common.sh@936 -- # '[' -z 61127 ']' 00:07:56.725 18:06:14 -- common/autotest_common.sh@940 -- # kill -0 61127 00:07:56.725 18:06:14 -- common/autotest_common.sh@941 -- # uname 00:07:56.725 18:06:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.725 18:06:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61127 00:07:56.725 killing process with pid 61127 00:07:56.725 Received shutdown signal, test time was about 10.000000 seconds 00:07:56.725 00:07:56.725 Latency(us) 00:07:56.725 [2024-11-18T18:06:15.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.725 [2024-11-18T18:06:15.329Z] =================================================================================================================== 00:07:56.725 [2024-11-18T18:06:15.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:56.725 18:06:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:56.725 18:06:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:56.725 18:06:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61127' 00:07:56.725 18:06:14 -- 
common/autotest_common.sh@955 -- # kill 61127 00:07:56.725 18:06:14 -- common/autotest_common.sh@960 -- # wait 61127 00:07:56.725 18:06:15 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.984 18:06:15 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:56.984 18:06:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60770 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@74 -- # wait 60770 00:07:57.243 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60770 Killed "${NVMF_APP[@]}" "$@" 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@74 -- # true 00:07:57.243 18:06:15 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:07:57.243 18:06:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:57.243 18:06:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.243 18:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.243 18:06:15 -- nvmf/common.sh@469 -- # nvmfpid=61282 00:07:57.243 18:06:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:57.243 18:06:15 -- nvmf/common.sh@470 -- # waitforlisten 61282 00:07:57.243 18:06:15 -- common/autotest_common.sh@829 -- # '[' -z 61282 ']' 00:07:57.243 18:06:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.244 18:06:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.244 18:06:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.244 18:06:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.244 18:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.244 [2024-11-18 18:06:15.795640] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.244 [2024-11-18 18:06:15.795764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.503 [2024-11-18 18:06:15.938131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.503 [2024-11-18 18:06:15.986961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.503 [2024-11-18 18:06:15.987107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.503 [2024-11-18 18:06:15.987120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.503 [2024-11-18 18:06:15.987128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
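This is the dirty variant: bdev_lvol_grow_lvstore already ran, and the first nvmf target (pid 60770) was then killed with SIGKILL, so the lvstore metadata on the AIO file was not cleanly unloaded; that is why a blobstore recovery pass runs when the freshly started target re-creates the AIO bdev below. In rpc.py terms (same calls as logged; <lvstore-uuid> is the lvstore UUID from this run):

  # re-create the AIO bdev on the new target; loading the dirty lvstore triggers
  # "Performing recovery on blobstore" before it is reported again
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # the grown geometry must have survived the crash: 99 data clusters, 61 free
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvstore-uuid>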
00:07:57.503 [2024-11-18 18:06:15.987158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.438 18:06:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.438 18:06:16 -- common/autotest_common.sh@862 -- # return 0 00:07:58.438 18:06:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:58.438 18:06:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.438 18:06:16 -- common/autotest_common.sh@10 -- # set +x 00:07:58.438 18:06:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.438 18:06:16 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.438 [2024-11-18 18:06:16.934148] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:58.438 [2024-11-18 18:06:16.934466] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:58.438 [2024-11-18 18:06:16.934720] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:58.438 18:06:16 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:07:58.438 18:06:16 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 9440f20c-98d4-414f-8a3d-b959afdf68a8 00:07:58.438 18:06:16 -- common/autotest_common.sh@897 -- # local bdev_name=9440f20c-98d4-414f-8a3d-b959afdf68a8 00:07:58.438 18:06:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:58.438 18:06:16 -- common/autotest_common.sh@899 -- # local i 00:07:58.438 18:06:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:58.438 18:06:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:58.438 18:06:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.697 18:06:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9440f20c-98d4-414f-8a3d-b959afdf68a8 -t 2000 00:07:58.956 [ 00:07:58.956 { 00:07:58.956 "name": "9440f20c-98d4-414f-8a3d-b959afdf68a8", 00:07:58.956 "aliases": [ 00:07:58.956 "lvs/lvol" 00:07:58.956 ], 00:07:58.956 "product_name": "Logical Volume", 00:07:58.956 "block_size": 4096, 00:07:58.956 "num_blocks": 38912, 00:07:58.956 "uuid": "9440f20c-98d4-414f-8a3d-b959afdf68a8", 00:07:58.956 "assigned_rate_limits": { 00:07:58.956 "rw_ios_per_sec": 0, 00:07:58.956 "rw_mbytes_per_sec": 0, 00:07:58.956 "r_mbytes_per_sec": 0, 00:07:58.956 "w_mbytes_per_sec": 0 00:07:58.956 }, 00:07:58.956 "claimed": false, 00:07:58.956 "zoned": false, 00:07:58.956 "supported_io_types": { 00:07:58.956 "read": true, 00:07:58.956 "write": true, 00:07:58.956 "unmap": true, 00:07:58.956 "write_zeroes": true, 00:07:58.956 "flush": false, 00:07:58.956 "reset": true, 00:07:58.956 "compare": false, 00:07:58.956 "compare_and_write": false, 00:07:58.956 "abort": false, 00:07:58.956 "nvme_admin": false, 00:07:58.956 "nvme_io": false 00:07:58.956 }, 00:07:58.956 "driver_specific": { 00:07:58.956 "lvol": { 00:07:58.956 "lvol_store_uuid": "0961c87e-7227-466c-ab24-049823e78f85", 00:07:58.956 "base_bdev": "aio_bdev", 00:07:58.956 "thin_provision": false, 00:07:58.956 "snapshot": false, 00:07:58.956 "clone": false, 00:07:58.956 "esnap_clone": false 00:07:58.956 } 00:07:58.956 } 00:07:58.956 } 00:07:58.956 ] 00:07:58.956 18:06:17 -- common/autotest_common.sh@905 -- # return 0 00:07:58.956 18:06:17 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0961c87e-7227-466c-ab24-049823e78f85 00:07:58.956 18:06:17 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:07:59.215 18:06:17 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:07:59.215 18:06:17 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:59.215 18:06:17 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:07:59.473 18:06:18 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:07:59.473 18:06:18 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.732 [2024-11-18 18:06:18.247923] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:59.732 18:06:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:59.732 18:06:18 -- common/autotest_common.sh@650 -- # local es=0 00:07:59.732 18:06:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:59.732 18:06:18 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.732 18:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.732 18:06:18 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.732 18:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.732 18:06:18 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.732 18:06:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.732 18:06:18 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.732 18:06:18 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.732 18:06:18 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:07:59.991 request: 00:07:59.991 { 00:07:59.991 "uuid": "0961c87e-7227-466c-ab24-049823e78f85", 00:07:59.991 "method": "bdev_lvol_get_lvstores", 00:07:59.991 "req_id": 1 00:07:59.991 } 00:07:59.991 Got JSON-RPC error response 00:07:59.991 response: 00:07:59.991 { 00:07:59.991 "code": -19, 00:07:59.991 "message": "No such device" 00:07:59.991 } 00:07:59.991 18:06:18 -- common/autotest_common.sh@653 -- # es=1 00:07:59.991 18:06:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.991 18:06:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.991 18:06:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.991 18:06:18 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.250 aio_bdev 00:08:00.250 18:06:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9440f20c-98d4-414f-8a3d-b959afdf68a8 00:08:00.250 18:06:18 -- common/autotest_common.sh@897 -- # local bdev_name=9440f20c-98d4-414f-8a3d-b959afdf68a8 00:08:00.250 18:06:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:00.250 18:06:18 -- common/autotest_common.sh@899 -- # local i 00:08:00.250 18:06:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:00.250 18:06:18 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:00.250 18:06:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.509 18:06:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9440f20c-98d4-414f-8a3d-b959afdf68a8 -t 2000 00:08:00.768 [ 00:08:00.768 { 00:08:00.768 "name": "9440f20c-98d4-414f-8a3d-b959afdf68a8", 00:08:00.768 "aliases": [ 00:08:00.768 "lvs/lvol" 00:08:00.768 ], 00:08:00.768 "product_name": "Logical Volume", 00:08:00.768 "block_size": 4096, 00:08:00.768 "num_blocks": 38912, 00:08:00.768 "uuid": "9440f20c-98d4-414f-8a3d-b959afdf68a8", 00:08:00.768 "assigned_rate_limits": { 00:08:00.768 "rw_ios_per_sec": 0, 00:08:00.768 "rw_mbytes_per_sec": 0, 00:08:00.768 "r_mbytes_per_sec": 0, 00:08:00.768 "w_mbytes_per_sec": 0 00:08:00.768 }, 00:08:00.768 "claimed": false, 00:08:00.768 "zoned": false, 00:08:00.768 "supported_io_types": { 00:08:00.768 "read": true, 00:08:00.768 "write": true, 00:08:00.768 "unmap": true, 00:08:00.768 "write_zeroes": true, 00:08:00.768 "flush": false, 00:08:00.768 "reset": true, 00:08:00.768 "compare": false, 00:08:00.768 "compare_and_write": false, 00:08:00.768 "abort": false, 00:08:00.768 "nvme_admin": false, 00:08:00.768 "nvme_io": false 00:08:00.768 }, 00:08:00.768 "driver_specific": { 00:08:00.768 "lvol": { 00:08:00.768 "lvol_store_uuid": "0961c87e-7227-466c-ab24-049823e78f85", 00:08:00.768 "base_bdev": "aio_bdev", 00:08:00.768 "thin_provision": false, 00:08:00.768 "snapshot": false, 00:08:00.768 "clone": false, 00:08:00.768 "esnap_clone": false 00:08:00.768 } 00:08:00.768 } 00:08:00.768 } 00:08:00.768 ] 00:08:00.768 18:06:19 -- common/autotest_common.sh@905 -- # return 0 00:08:00.768 18:06:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:00.768 18:06:19 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:08:01.026 18:06:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:01.026 18:06:19 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0961c87e-7227-466c-ab24-049823e78f85 00:08:01.026 18:06:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:01.285 18:06:19 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:01.285 18:06:19 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9440f20c-98d4-414f-8a3d-b959afdf68a8 00:08:01.545 18:06:19 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0961c87e-7227-466c-ab24-049823e78f85 00:08:01.804 18:06:20 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.063 18:06:20 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:02.323 00:08:02.323 real 0m20.051s 00:08:02.323 user 0m40.622s 00:08:02.323 sys 0m9.335s 00:08:02.323 18:06:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.323 18:06:20 -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 ************************************ 00:08:02.323 END TEST lvs_grow_dirty 00:08:02.323 ************************************ 00:08:02.323 18:06:20 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:02.323 18:06:20 -- common/autotest_common.sh@806 -- # type=--id 00:08:02.323 18:06:20 -- 
common/autotest_common.sh@807 -- # id=0 00:08:02.323 18:06:20 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:02.323 18:06:20 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:02.323 18:06:20 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:02.323 18:06:20 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:02.323 18:06:20 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:02.323 18:06:20 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:02.323 nvmf_trace.0 00:08:02.323 18:06:20 -- common/autotest_common.sh@821 -- # return 0 00:08:02.323 18:06:20 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:02.323 18:06:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:02.323 18:06:20 -- nvmf/common.sh@116 -- # sync 00:08:03.261 18:06:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:03.261 18:06:21 -- nvmf/common.sh@119 -- # set +e 00:08:03.261 18:06:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:03.261 18:06:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:03.261 rmmod nvme_tcp 00:08:03.261 rmmod nvme_fabrics 00:08:03.261 rmmod nvme_keyring 00:08:03.261 18:06:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:03.261 18:06:21 -- nvmf/common.sh@123 -- # set -e 00:08:03.261 18:06:21 -- nvmf/common.sh@124 -- # return 0 00:08:03.261 18:06:21 -- nvmf/common.sh@477 -- # '[' -n 61282 ']' 00:08:03.261 18:06:21 -- nvmf/common.sh@478 -- # killprocess 61282 00:08:03.261 18:06:21 -- common/autotest_common.sh@936 -- # '[' -z 61282 ']' 00:08:03.261 18:06:21 -- common/autotest_common.sh@940 -- # kill -0 61282 00:08:03.261 18:06:21 -- common/autotest_common.sh@941 -- # uname 00:08:03.261 18:06:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:03.261 18:06:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61282 00:08:03.261 18:06:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:03.261 18:06:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:03.261 killing process with pid 61282 00:08:03.261 18:06:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61282' 00:08:03.261 18:06:21 -- common/autotest_common.sh@955 -- # kill 61282 00:08:03.261 18:06:21 -- common/autotest_common.sh@960 -- # wait 61282 00:08:03.261 18:06:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:03.261 18:06:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:03.261 18:06:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:03.261 18:06:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.261 18:06:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:03.261 18:06:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.261 18:06:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.261 18:06:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.261 18:06:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:03.261 00:08:03.261 real 0m41.428s 00:08:03.261 user 1m4.868s 00:08:03.261 sys 0m12.769s 00:08:03.261 18:06:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.261 ************************************ 00:08:03.261 END TEST nvmf_lvs_grow 00:08:03.261 ************************************ 00:08:03.261 18:06:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.521 18:06:21 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:03.521 18:06:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:03.521 18:06:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.521 18:06:21 -- common/autotest_common.sh@10 -- # set +x 00:08:03.521 ************************************ 00:08:03.521 START TEST nvmf_bdev_io_wait 00:08:03.521 ************************************ 00:08:03.521 18:06:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:03.521 * Looking for test storage... 00:08:03.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.521 18:06:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.521 18:06:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.521 18:06:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.521 18:06:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.521 18:06:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.521 18:06:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.521 18:06:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.521 18:06:22 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.521 18:06:22 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.521 18:06:22 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.521 18:06:22 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.521 18:06:22 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.521 18:06:22 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.521 18:06:22 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.521 18:06:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.521 18:06:22 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.521 18:06:22 -- scripts/common.sh@344 -- # : 1 00:08:03.521 18:06:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.521 18:06:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.521 18:06:22 -- scripts/common.sh@364 -- # decimal 1 00:08:03.521 18:06:22 -- scripts/common.sh@352 -- # local d=1 00:08:03.521 18:06:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.521 18:06:22 -- scripts/common.sh@354 -- # echo 1 00:08:03.521 18:06:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.521 18:06:22 -- scripts/common.sh@365 -- # decimal 2 00:08:03.521 18:06:22 -- scripts/common.sh@352 -- # local d=2 00:08:03.521 18:06:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.521 18:06:22 -- scripts/common.sh@354 -- # echo 2 00:08:03.521 18:06:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.521 18:06:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.521 18:06:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.521 18:06:22 -- scripts/common.sh@367 -- # return 0 00:08:03.521 18:06:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.521 18:06:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.521 --rc genhtml_branch_coverage=1 00:08:03.521 --rc genhtml_function_coverage=1 00:08:03.521 --rc genhtml_legend=1 00:08:03.521 --rc geninfo_all_blocks=1 00:08:03.521 --rc geninfo_unexecuted_blocks=1 00:08:03.521 00:08:03.521 ' 00:08:03.521 18:06:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.521 --rc genhtml_branch_coverage=1 00:08:03.521 --rc genhtml_function_coverage=1 00:08:03.521 --rc genhtml_legend=1 00:08:03.521 --rc geninfo_all_blocks=1 00:08:03.521 --rc geninfo_unexecuted_blocks=1 00:08:03.521 00:08:03.521 ' 00:08:03.521 18:06:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.521 --rc genhtml_branch_coverage=1 00:08:03.521 --rc genhtml_function_coverage=1 00:08:03.521 --rc genhtml_legend=1 00:08:03.521 --rc geninfo_all_blocks=1 00:08:03.521 --rc geninfo_unexecuted_blocks=1 00:08:03.521 00:08:03.521 ' 00:08:03.521 18:06:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.521 --rc genhtml_branch_coverage=1 00:08:03.521 --rc genhtml_function_coverage=1 00:08:03.521 --rc genhtml_legend=1 00:08:03.521 --rc geninfo_all_blocks=1 00:08:03.521 --rc geninfo_unexecuted_blocks=1 00:08:03.521 00:08:03.521 ' 00:08:03.521 18:06:22 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.521 18:06:22 -- nvmf/common.sh@7 -- # uname -s 00:08:03.521 18:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.521 18:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.521 18:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.521 18:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.521 18:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.521 18:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.521 18:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.521 18:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.521 18:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.521 18:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.521 18:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 
00:08:03.521 18:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:03.521 18:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.521 18:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.521 18:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.521 18:06:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.521 18:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.521 18:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.521 18:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.521 18:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.521 18:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.521 18:06:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.521 18:06:22 -- paths/export.sh@5 -- # export PATH 00:08:03.522 18:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.522 18:06:22 -- nvmf/common.sh@46 -- # : 0 00:08:03.522 18:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.522 18:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.522 18:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.522 18:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.522 18:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.522 18:06:22 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:03.522 18:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.522 18:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.522 18:06:22 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.522 18:06:22 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.522 18:06:22 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:03.522 18:06:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:03.522 18:06:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.522 18:06:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:03.522 18:06:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:03.522 18:06:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:03.522 18:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.522 18:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.522 18:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.522 18:06:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:03.522 18:06:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:03.522 18:06:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:03.522 18:06:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:03.522 18:06:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:03.522 18:06:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:03.522 18:06:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.522 18:06:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.522 18:06:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:03.522 18:06:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:03.522 18:06:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.522 18:06:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.522 18:06:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.522 18:06:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.522 18:06:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.522 18:06:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.522 18:06:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.522 18:06:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.522 18:06:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:03.522 18:06:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:03.522 Cannot find device "nvmf_tgt_br" 00:08:03.522 18:06:22 -- nvmf/common.sh@154 -- # true 00:08:03.522 18:06:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.781 Cannot find device "nvmf_tgt_br2" 00:08:03.781 18:06:22 -- nvmf/common.sh@155 -- # true 00:08:03.781 18:06:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:03.781 18:06:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:03.781 Cannot find device "nvmf_tgt_br" 00:08:03.781 18:06:22 -- nvmf/common.sh@157 -- # true 00:08:03.781 18:06:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:03.781 Cannot find device "nvmf_tgt_br2" 00:08:03.781 18:06:22 -- nvmf/common.sh@158 -- # true 00:08:03.781 18:06:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:03.781 18:06:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:03.781 18:06:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.781 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.781 18:06:22 -- nvmf/common.sh@161 -- # true 00:08:03.781 18:06:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.781 18:06:22 -- nvmf/common.sh@162 -- # true 00:08:03.781 18:06:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.781 18:06:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.781 18:06:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.781 18:06:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.781 18:06:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.781 18:06:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.781 18:06:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.781 18:06:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:03.781 18:06:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:03.781 18:06:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:03.781 18:06:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:03.781 18:06:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:03.781 18:06:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:03.781 18:06:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.781 18:06:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.781 18:06:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.781 18:06:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:03.781 18:06:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:03.781 18:06:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:04.040 18:06:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:04.040 18:06:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:04.041 18:06:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:04.041 18:06:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:04.041 18:06:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:04.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:04.041 00:08:04.041 --- 10.0.0.2 ping statistics --- 00:08:04.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.041 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:04.041 18:06:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:04.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:04.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:08:04.041 00:08:04.041 --- 10.0.0.3 ping statistics --- 00:08:04.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.041 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:04.041 18:06:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:04.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:04.041 00:08:04.041 --- 10.0.0.1 ping statistics --- 00:08:04.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.041 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:04.041 18:06:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.041 18:06:22 -- nvmf/common.sh@421 -- # return 0 00:08:04.041 18:06:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.041 18:06:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.041 18:06:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:04.041 18:06:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:04.041 18:06:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.041 18:06:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:04.041 18:06:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:04.041 18:06:22 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:04.041 18:06:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.041 18:06:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.041 18:06:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.041 18:06:22 -- nvmf/common.sh@469 -- # nvmfpid=61606 00:08:04.041 18:06:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:04.041 18:06:22 -- nvmf/common.sh@470 -- # waitforlisten 61606 00:08:04.041 18:06:22 -- common/autotest_common.sh@829 -- # '[' -z 61606 ']' 00:08:04.041 18:06:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.041 18:06:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.041 18:06:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.041 18:06:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.041 18:06:22 -- common/autotest_common.sh@10 -- # set +x 00:08:04.041 [2024-11-18 18:06:22.524966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:04.041 [2024-11-18 18:06:22.525327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.300 [2024-11-18 18:06:22.665274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.300 [2024-11-18 18:06:22.721938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.300 [2024-11-18 18:06:22.722364] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.300 [2024-11-18 18:06:22.722418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.300 [2024-11-18 18:06:22.722604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
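For readers reconstructing the setup: the nvmf_veth_init sequence traced above reduces to the sketch below. It assumes root plus iproute2 and iptables, keeps the interface, namespace and address names shown in the trace, and omits the second target interface (nvmf_tgt_if2 / 10.0.0.3), which is created the same way.

# Sketch of the veth/bridge topology built by nvmf_veth_init (run as root)
ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge joins the two host-side ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator-to-target sanity check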
00:08:04.300 [2024-11-18 18:06:22.722759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.300 [2024-11-18 18:06:22.722834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.300 [2024-11-18 18:06:22.723702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.300 [2024-11-18 18:06:22.723715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.236 18:06:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.236 18:06:23 -- common/autotest_common.sh@862 -- # return 0 00:08:05.236 18:06:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:05.236 18:06:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 18:06:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 [2024-11-18 18:06:23.579181] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 Malloc0 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.236 18:06:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.236 18:06:23 -- common/autotest_common.sh@10 -- # set +x 00:08:05.236 [2024-11-18 18:06:23.635823] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.236 18:06:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61641 00:08:05.236 18:06:23 
-- target/bdev_io_wait.sh@30 -- # READ_PID=61643 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # config=() 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # local subsystem config 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61645 00:08:05.236 18:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:05.236 { 00:08:05.236 "params": { 00:08:05.236 "name": "Nvme$subsystem", 00:08:05.236 "trtype": "$TEST_TRANSPORT", 00:08:05.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.236 "adrfam": "ipv4", 00:08:05.236 "trsvcid": "$NVMF_PORT", 00:08:05.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.236 "hdgst": ${hdgst:-false}, 00:08:05.236 "ddgst": ${ddgst:-false} 00:08:05.236 }, 00:08:05.236 "method": "bdev_nvme_attach_controller" 00:08:05.236 } 00:08:05.236 EOF 00:08:05.236 )") 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61647 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@35 -- # sync 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # config=() 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # cat 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # local subsystem config 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:05.236 18:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:05.236 { 00:08:05.236 "params": { 00:08:05.236 "name": "Nvme$subsystem", 00:08:05.236 "trtype": "$TEST_TRANSPORT", 00:08:05.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.236 "adrfam": "ipv4", 00:08:05.236 "trsvcid": "$NVMF_PORT", 00:08:05.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.236 "hdgst": ${hdgst:-false}, 00:08:05.236 "ddgst": ${ddgst:-false} 00:08:05.236 }, 00:08:05.236 "method": "bdev_nvme_attach_controller" 00:08:05.236 } 00:08:05.236 EOF 00:08:05.236 )") 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # config=() 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # local subsystem config 00:08:05.236 18:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:05.236 18:06:23 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:05.236 { 00:08:05.236 "params": { 00:08:05.236 "name": "Nvme$subsystem", 00:08:05.236 "trtype": "$TEST_TRANSPORT", 00:08:05.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.236 "adrfam": "ipv4", 00:08:05.236 "trsvcid": "$NVMF_PORT", 00:08:05.236 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.236 "hdgst": ${hdgst:-false}, 00:08:05.236 "ddgst": ${ddgst:-false} 00:08:05.236 }, 00:08:05.236 "method": "bdev_nvme_attach_controller" 00:08:05.236 } 00:08:05.236 EOF 00:08:05.236 )") 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # config=() 00:08:05.236 18:06:23 -- nvmf/common.sh@520 -- # local subsystem config 00:08:05.236 18:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:05.236 { 00:08:05.236 "params": { 00:08:05.236 "name": "Nvme$subsystem", 00:08:05.236 "trtype": "$TEST_TRANSPORT", 00:08:05.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:05.236 "adrfam": "ipv4", 00:08:05.236 "trsvcid": "$NVMF_PORT", 00:08:05.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:05.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:05.236 "hdgst": ${hdgst:-false}, 00:08:05.236 "ddgst": ${ddgst:-false} 00:08:05.236 }, 00:08:05.236 "method": "bdev_nvme_attach_controller" 00:08:05.236 } 00:08:05.236 EOF 00:08:05.236 )") 00:08:05.236 18:06:23 -- nvmf/common.sh@544 -- # jq . 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # cat 00:08:05.236 18:06:23 -- nvmf/common.sh@545 -- # IFS=, 00:08:05.236 18:06:23 -- nvmf/common.sh@542 -- # cat 00:08:05.236 18:06:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:05.236 "params": { 00:08:05.237 "name": "Nvme1", 00:08:05.237 "trtype": "tcp", 00:08:05.237 "traddr": "10.0.0.2", 00:08:05.237 "adrfam": "ipv4", 00:08:05.237 "trsvcid": "4420", 00:08:05.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.237 "hdgst": false, 00:08:05.237 "ddgst": false 00:08:05.237 }, 00:08:05.237 "method": "bdev_nvme_attach_controller" 00:08:05.237 }' 00:08:05.237 18:06:23 -- nvmf/common.sh@542 -- # cat 00:08:05.237 18:06:23 -- nvmf/common.sh@544 -- # jq . 00:08:05.237 18:06:23 -- nvmf/common.sh@545 -- # IFS=, 00:08:05.237 18:06:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:05.237 "params": { 00:08:05.237 "name": "Nvme1", 00:08:05.237 "trtype": "tcp", 00:08:05.237 "traddr": "10.0.0.2", 00:08:05.237 "adrfam": "ipv4", 00:08:05.237 "trsvcid": "4420", 00:08:05.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.237 "hdgst": false, 00:08:05.237 "ddgst": false 00:08:05.237 }, 00:08:05.237 "method": "bdev_nvme_attach_controller" 00:08:05.237 }' 00:08:05.237 18:06:23 -- nvmf/common.sh@544 -- # jq . 00:08:05.237 18:06:23 -- nvmf/common.sh@545 -- # IFS=, 00:08:05.237 18:06:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:05.237 "params": { 00:08:05.237 "name": "Nvme1", 00:08:05.237 "trtype": "tcp", 00:08:05.237 "traddr": "10.0.0.2", 00:08:05.237 "adrfam": "ipv4", 00:08:05.237 "trsvcid": "4420", 00:08:05.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.237 "hdgst": false, 00:08:05.237 "ddgst": false 00:08:05.237 }, 00:08:05.237 "method": "bdev_nvme_attach_controller" 00:08:05.237 }' 00:08:05.237 18:06:23 -- nvmf/common.sh@544 -- # jq . 
00:08:05.237 18:06:23 -- nvmf/common.sh@545 -- # IFS=, 00:08:05.237 18:06:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:05.237 "params": { 00:08:05.237 "name": "Nvme1", 00:08:05.237 "trtype": "tcp", 00:08:05.237 "traddr": "10.0.0.2", 00:08:05.237 "adrfam": "ipv4", 00:08:05.237 "trsvcid": "4420", 00:08:05.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:05.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:05.237 "hdgst": false, 00:08:05.237 "ddgst": false 00:08:05.237 }, 00:08:05.237 "method": "bdev_nvme_attach_controller" 00:08:05.237 }' 00:08:05.237 [2024-11-18 18:06:23.714318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.237 [2024-11-18 18:06:23.714671] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:05.237 [2024-11-18 18:06:23.721933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.237 [2024-11-18 18:06:23.722019] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:05.237 [2024-11-18 18:06:23.722406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.237 [2024-11-18 18:06:23.722705] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:05.237 18:06:23 -- target/bdev_io_wait.sh@37 -- # wait 61641 00:08:05.237 [2024-11-18 18:06:23.757517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.237 [2024-11-18 18:06:23.758342] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:05.496 [2024-11-18 18:06:23.887208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.496 [2024-11-18 18:06:23.929010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:05.496 [2024-11-18 18:06:23.936351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.496 [2024-11-18 18:06:23.974333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.496 [2024-11-18 18:06:23.990133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:05.496 [2024-11-18 18:06:24.020179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.496 [2024-11-18 18:06:24.040636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:05.496 Running I/O for 1 seconds... 00:08:05.496 [2024-11-18 18:06:24.073377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:05.754 Running I/O for 1 seconds... 00:08:05.754 Running I/O for 1 seconds... 00:08:05.754 Running I/O for 1 seconds... 
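The four bdevperf instances launched above (write, read, flush and unmap, on core masks 0x10, 0x20, 0x40 and 0x80) each read a generated JSON config from /dev/fd/63. The params block below is the one printed verbatim in the trace; the surrounding "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json emits, since only the inner object is visible in this log. A single run could be reproduced roughly as follows:

# Hypothetical stand-alone reproduction of the "write" instance above
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# qd 128, 4 KiB I/O, 1 s run; only -w and -m differ across the four instances in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256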
00:08:06.691 00:08:06.691 Latency(us) 00:08:06.691 [2024-11-18T18:06:25.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.691 [2024-11-18T18:06:25.295Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:06.691 Nvme1n1 : 1.01 10303.84 40.25 0.00 0.00 12365.74 8340.95 21567.30 00:08:06.691 [2024-11-18T18:06:25.295Z] =================================================================================================================== 00:08:06.691 [2024-11-18T18:06:25.295Z] Total : 10303.84 40.25 0.00 0.00 12365.74 8340.95 21567.30 00:08:06.691 00:08:06.691 Latency(us) 00:08:06.691 [2024-11-18T18:06:25.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.691 [2024-11-18T18:06:25.295Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:06.691 Nvme1n1 : 1.03 4530.45 17.70 0.00 0.00 27758.29 10366.60 46709.29 00:08:06.691 [2024-11-18T18:06:25.295Z] =================================================================================================================== 00:08:06.691 [2024-11-18T18:06:25.295Z] Total : 4530.45 17.70 0.00 0.00 27758.29 10366.60 46709.29 00:08:06.691 00:08:06.691 Latency(us) 00:08:06.691 [2024-11-18T18:06:25.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.691 [2024-11-18T18:06:25.295Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:06.691 Nvme1n1 : 1.01 4524.42 17.67 0.00 0.00 28170.46 7983.48 60531.43 00:08:06.691 [2024-11-18T18:06:25.295Z] =================================================================================================================== 00:08:06.691 [2024-11-18T18:06:25.295Z] Total : 4524.42 17.67 0.00 0.00 28170.46 7983.48 60531.43 00:08:06.691 00:08:06.691 Latency(us) 00:08:06.691 [2024-11-18T18:06:25.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.691 [2024-11-18T18:06:25.295Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:06.691 Nvme1n1 : 1.00 171048.50 668.16 0.00 0.00 745.68 340.71 1079.85 00:08:06.691 [2024-11-18T18:06:25.295Z] =================================================================================================================== 00:08:06.691 [2024-11-18T18:06:25.295Z] Total : 171048.50 668.16 0.00 0.00 745.68 340.71 1079.85 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@38 -- # wait 61643 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@39 -- # wait 61645 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@40 -- # wait 61647 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.950 18:06:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.950 18:06:25 -- common/autotest_common.sh@10 -- # set +x 00:08:06.950 18:06:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:06.950 18:06:25 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:06.950 18:06:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:06.950 18:06:25 -- nvmf/common.sh@116 -- # sync 00:08:06.950 18:06:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:06.950 18:06:25 -- nvmf/common.sh@119 -- # set +e 00:08:06.950 18:06:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.950 18:06:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:06.950 rmmod nvme_tcp 00:08:06.950 rmmod nvme_fabrics 00:08:06.950 rmmod nvme_keyring 00:08:06.950 18:06:25 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:06.950 18:06:25 -- nvmf/common.sh@123 -- # set -e 00:08:06.950 18:06:25 -- nvmf/common.sh@124 -- # return 0 00:08:06.950 18:06:25 -- nvmf/common.sh@477 -- # '[' -n 61606 ']' 00:08:06.950 18:06:25 -- nvmf/common.sh@478 -- # killprocess 61606 00:08:06.950 18:06:25 -- common/autotest_common.sh@936 -- # '[' -z 61606 ']' 00:08:06.950 18:06:25 -- common/autotest_common.sh@940 -- # kill -0 61606 00:08:06.950 18:06:25 -- common/autotest_common.sh@941 -- # uname 00:08:06.950 18:06:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:06.950 18:06:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61606 00:08:06.950 killing process with pid 61606 00:08:06.950 18:06:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:06.950 18:06:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:06.950 18:06:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61606' 00:08:06.950 18:06:25 -- common/autotest_common.sh@955 -- # kill 61606 00:08:06.950 18:06:25 -- common/autotest_common.sh@960 -- # wait 61606 00:08:07.209 18:06:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.209 18:06:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:07.209 18:06:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:07.209 18:06:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.209 18:06:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:07.209 18:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.209 18:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.209 18:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.209 18:06:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:07.209 00:08:07.209 real 0m3.838s 00:08:07.209 user 0m16.548s 00:08:07.209 sys 0m1.885s 00:08:07.209 18:06:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.209 18:06:25 -- common/autotest_common.sh@10 -- # set +x 00:08:07.209 ************************************ 00:08:07.209 END TEST nvmf_bdev_io_wait 00:08:07.209 ************************************ 00:08:07.209 18:06:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.209 18:06:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:07.209 18:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.209 18:06:25 -- common/autotest_common.sh@10 -- # set +x 00:08:07.209 ************************************ 00:08:07.209 START TEST nvmf_queue_depth 00:08:07.209 ************************************ 00:08:07.209 18:06:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.480 * Looking for test storage... 
00:08:07.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.480 18:06:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:07.480 18:06:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:07.480 18:06:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:07.480 18:06:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:07.480 18:06:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:07.480 18:06:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:07.480 18:06:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:07.480 18:06:25 -- scripts/common.sh@335 -- # IFS=.-: 00:08:07.480 18:06:25 -- scripts/common.sh@335 -- # read -ra ver1 00:08:07.480 18:06:25 -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.480 18:06:25 -- scripts/common.sh@336 -- # read -ra ver2 00:08:07.480 18:06:25 -- scripts/common.sh@337 -- # local 'op=<' 00:08:07.480 18:06:25 -- scripts/common.sh@339 -- # ver1_l=2 00:08:07.480 18:06:25 -- scripts/common.sh@340 -- # ver2_l=1 00:08:07.480 18:06:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:07.480 18:06:25 -- scripts/common.sh@343 -- # case "$op" in 00:08:07.480 18:06:25 -- scripts/common.sh@344 -- # : 1 00:08:07.480 18:06:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:07.480 18:06:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.480 18:06:25 -- scripts/common.sh@364 -- # decimal 1 00:08:07.480 18:06:25 -- scripts/common.sh@352 -- # local d=1 00:08:07.480 18:06:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.480 18:06:25 -- scripts/common.sh@354 -- # echo 1 00:08:07.480 18:06:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:07.480 18:06:25 -- scripts/common.sh@365 -- # decimal 2 00:08:07.480 18:06:25 -- scripts/common.sh@352 -- # local d=2 00:08:07.480 18:06:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.480 18:06:25 -- scripts/common.sh@354 -- # echo 2 00:08:07.480 18:06:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:07.480 18:06:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:07.480 18:06:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:07.480 18:06:25 -- scripts/common.sh@367 -- # return 0 00:08:07.480 18:06:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.480 18:06:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.480 --rc genhtml_branch_coverage=1 00:08:07.480 --rc genhtml_function_coverage=1 00:08:07.480 --rc genhtml_legend=1 00:08:07.480 --rc geninfo_all_blocks=1 00:08:07.480 --rc geninfo_unexecuted_blocks=1 00:08:07.480 00:08:07.480 ' 00:08:07.480 18:06:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.480 --rc genhtml_branch_coverage=1 00:08:07.480 --rc genhtml_function_coverage=1 00:08:07.480 --rc genhtml_legend=1 00:08:07.480 --rc geninfo_all_blocks=1 00:08:07.480 --rc geninfo_unexecuted_blocks=1 00:08:07.480 00:08:07.480 ' 00:08:07.480 18:06:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.480 --rc genhtml_branch_coverage=1 00:08:07.480 --rc genhtml_function_coverage=1 00:08:07.480 --rc genhtml_legend=1 00:08:07.480 --rc geninfo_all_blocks=1 00:08:07.480 --rc geninfo_unexecuted_blocks=1 00:08:07.480 00:08:07.480 ' 00:08:07.480 
18:06:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:07.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.481 --rc genhtml_branch_coverage=1 00:08:07.481 --rc genhtml_function_coverage=1 00:08:07.481 --rc genhtml_legend=1 00:08:07.481 --rc geninfo_all_blocks=1 00:08:07.481 --rc geninfo_unexecuted_blocks=1 00:08:07.481 00:08:07.481 ' 00:08:07.481 18:06:25 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.481 18:06:25 -- nvmf/common.sh@7 -- # uname -s 00:08:07.481 18:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.481 18:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.481 18:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.481 18:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.481 18:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.481 18:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.481 18:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.481 18:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.481 18:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.481 18:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:07.481 18:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:07.481 18:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.481 18:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.481 18:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.481 18:06:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.481 18:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.481 18:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.481 18:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.481 18:06:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.481 18:06:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.481 18:06:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.481 18:06:25 -- paths/export.sh@5 -- # export PATH 00:08:07.481 18:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.481 18:06:25 -- nvmf/common.sh@46 -- # : 0 00:08:07.481 18:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.481 18:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.481 18:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.481 18:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.481 18:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.481 18:06:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.481 18:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.481 18:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.481 18:06:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.481 18:06:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:07.481 18:06:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.481 18:06:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.481 18:06:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.481 18:06:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.481 18:06:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.481 18:06:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.481 18:06:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.481 18:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.481 18:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.481 18:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.481 18:06:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:07.481 18:06:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:07.481 18:06:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.481 18:06:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.481 18:06:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.481 18:06:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:07.481 18:06:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.481 18:06:25 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.481 18:06:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.481 18:06:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.481 18:06:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.481 18:06:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.481 18:06:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.481 18:06:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.481 18:06:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:07.481 18:06:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:07.481 Cannot find device "nvmf_tgt_br" 00:08:07.481 18:06:26 -- nvmf/common.sh@154 -- # true 00:08:07.481 18:06:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.481 Cannot find device "nvmf_tgt_br2" 00:08:07.481 18:06:26 -- nvmf/common.sh@155 -- # true 00:08:07.481 18:06:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:07.481 18:06:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:07.481 Cannot find device "nvmf_tgt_br" 00:08:07.481 18:06:26 -- nvmf/common.sh@157 -- # true 00:08:07.481 18:06:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:07.481 Cannot find device "nvmf_tgt_br2" 00:08:07.481 18:06:26 -- nvmf/common.sh@158 -- # true 00:08:07.481 18:06:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:07.749 18:06:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:07.749 18:06:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.749 18:06:26 -- nvmf/common.sh@161 -- # true 00:08:07.749 18:06:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.749 18:06:26 -- nvmf/common.sh@162 -- # true 00:08:07.749 18:06:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.749 18:06:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.749 18:06:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.749 18:06:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.749 18:06:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.749 18:06:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.749 18:06:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.749 18:06:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.749 18:06:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.749 18:06:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:07.749 18:06:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:07.749 18:06:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:07.749 18:06:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:07.749 18:06:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.749 18:06:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:08:07.749 18:06:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.749 18:06:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:07.749 18:06:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:07.749 18:06:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.749 18:06:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.749 18:06:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.749 18:06:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.749 18:06:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.749 18:06:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:07.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:07.749 00:08:07.749 --- 10.0.0.2 ping statistics --- 00:08:07.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.749 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:07.749 18:06:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:07.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:08:07.749 00:08:07.749 --- 10.0.0.3 ping statistics --- 00:08:07.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.749 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:07.749 18:06:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:07.749 00:08:07.749 --- 10.0.0.1 ping statistics --- 00:08:07.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.749 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:07.749 18:06:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.749 18:06:26 -- nvmf/common.sh@421 -- # return 0 00:08:07.749 18:06:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.749 18:06:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.749 18:06:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.749 18:06:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.749 18:06:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.749 18:06:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.749 18:06:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.749 18:06:26 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:07.749 18:06:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.749 18:06:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.749 18:06:26 -- common/autotest_common.sh@10 -- # set +x 00:08:07.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
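The nvmfappstart -m 0x2 call above expands, roughly, to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; the exact nvmf_tgt command line appears a few lines further down. A simplified sketch, with the waitforlisten logic reduced to a plain rpc_get_methods poll:

spdk=/home/vagrant/spdk_repo/spdk
# launch the target inside the namespace (core mask 0x2, all tracepoint groups enabled)
ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# crude stand-in for waitforlisten: poll until the RPC socket answers
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done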
00:08:07.749 18:06:26 -- nvmf/common.sh@469 -- # nvmfpid=61859 00:08:07.749 18:06:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:07.749 18:06:26 -- nvmf/common.sh@470 -- # waitforlisten 61859 00:08:07.749 18:06:26 -- common/autotest_common.sh@829 -- # '[' -z 61859 ']' 00:08:07.749 18:06:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.749 18:06:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.749 18:06:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.749 18:06:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.749 18:06:26 -- common/autotest_common.sh@10 -- # set +x 00:08:08.009 [2024-11-18 18:06:26.398396] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.009 [2024-11-18 18:06:26.398493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.009 [2024-11-18 18:06:26.542875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.009 [2024-11-18 18:06:26.596939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.009 [2024-11-18 18:06:26.597116] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.009 [2024-11-18 18:06:26.597144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.009 [2024-11-18 18:06:26.597151] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:08.009 [2024-11-18 18:06:26.597175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.946 18:06:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.946 18:06:27 -- common/autotest_common.sh@862 -- # return 0 00:08:08.946 18:06:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.946 18:06:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 18:06:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.946 18:06:27 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.946 18:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 [2024-11-18 18:06:27.380393] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.946 18:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.946 18:06:27 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:08.946 18:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 Malloc0 00:08:08.946 18:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.946 18:06:27 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.946 18:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 18:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.946 18:06:27 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.946 18:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 18:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.946 18:06:27 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.946 18:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 [2024-11-18 18:06:27.446078] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
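The rpc_cmd calls a few lines above configure the target in four steps: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, wrap it in subsystem nqn.2016-06.io.spdk:cnode1, and expose a listener on 10.0.0.2:4420. rpc_cmd in these tests is a thin wrapper around scripts/rpc.py, so the same sequence can be issued directly as sketched below; the earlier bdev_io_wait run additionally started the target with --wait-for-rpc and issued bdev_set_options -p 5 -c 1 plus framework_start_init before this point.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options taken verbatim from the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # export Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420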
00:08:08.946 18:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.946 18:06:27 -- target/queue_depth.sh@30 -- # bdevperf_pid=61897 00:08:08.946 18:06:27 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:08.946 18:06:27 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.946 18:06:27 -- target/queue_depth.sh@33 -- # waitforlisten 61897 /var/tmp/bdevperf.sock 00:08:08.946 18:06:27 -- common/autotest_common.sh@829 -- # '[' -z 61897 ']' 00:08:08.946 18:06:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.946 18:06:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.946 18:06:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.946 18:06:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.946 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:08:08.946 [2024-11-18 18:06:27.505201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.946 [2024-11-18 18:06:27.505474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61897 ] 00:08:09.206 [2024-11-18 18:06:27.646554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.206 [2024-11-18 18:06:27.714393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.143 18:06:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.143 18:06:28 -- common/autotest_common.sh@862 -- # return 0 00:08:10.143 18:06:28 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:10.143 18:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.143 18:06:28 -- common/autotest_common.sh@10 -- # set +x 00:08:10.143 NVMe0n1 00:08:10.143 18:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.143 18:06:28 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.143 Running I/O for 10 seconds... 
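Because bdevperf is started here with -z (stay idle until told to run) and -r /var/tmp/bdevperf.sock, the controller is attached and the workload is triggered over that RPC socket rather than through a JSON config. A rough equivalent of the traced sequence, using the paths shown in the log:

spdk=/home/vagrant/spdk_repo/spdk
# 1024-deep verify workload, 4 KiB I/O, 10 s; -z keeps bdevperf idle until perform_tests is sent
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# attach the exported namespace as bdev NVMe0n1 over NVMe/TCP
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the configured run and wait for the summary printed below
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests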
00:08:22.362 00:08:22.362 Latency(us) 00:08:22.362 [2024-11-18T18:06:40.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.362 [2024-11-18T18:06:40.966Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:22.362 Verification LBA range: start 0x0 length 0x4000 00:08:22.362 NVMe0n1 : 10.07 15463.45 60.40 0.00 0.00 65960.81 13881.72 56718.43 00:08:22.362 [2024-11-18T18:06:40.966Z] =================================================================================================================== 00:08:22.362 [2024-11-18T18:06:40.966Z] Total : 15463.45 60.40 0.00 0.00 65960.81 13881.72 56718.43 00:08:22.362 0 00:08:22.362 18:06:38 -- target/queue_depth.sh@39 -- # killprocess 61897 00:08:22.362 18:06:38 -- common/autotest_common.sh@936 -- # '[' -z 61897 ']' 00:08:22.362 18:06:38 -- common/autotest_common.sh@940 -- # kill -0 61897 00:08:22.362 18:06:38 -- common/autotest_common.sh@941 -- # uname 00:08:22.362 18:06:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.362 18:06:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61897 00:08:22.362 killing process with pid 61897 00:08:22.362 Received shutdown signal, test time was about 10.000000 seconds 00:08:22.362 00:08:22.362 Latency(us) 00:08:22.362 [2024-11-18T18:06:40.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.362 [2024-11-18T18:06:40.966Z] =================================================================================================================== 00:08:22.362 [2024-11-18T18:06:40.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:22.362 18:06:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:22.362 18:06:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:22.362 18:06:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61897' 00:08:22.362 18:06:38 -- common/autotest_common.sh@955 -- # kill 61897 00:08:22.362 18:06:38 -- common/autotest_common.sh@960 -- # wait 61897 00:08:22.362 18:06:38 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:22.362 18:06:38 -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:22.362 18:06:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:22.362 18:06:38 -- nvmf/common.sh@116 -- # sync 00:08:22.362 18:06:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:22.362 18:06:39 -- nvmf/common.sh@119 -- # set +e 00:08:22.362 18:06:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:22.362 18:06:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:22.362 rmmod nvme_tcp 00:08:22.362 rmmod nvme_fabrics 00:08:22.362 rmmod nvme_keyring 00:08:22.362 18:06:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:22.362 18:06:39 -- nvmf/common.sh@123 -- # set -e 00:08:22.362 18:06:39 -- nvmf/common.sh@124 -- # return 0 00:08:22.362 18:06:39 -- nvmf/common.sh@477 -- # '[' -n 61859 ']' 00:08:22.362 18:06:39 -- nvmf/common.sh@478 -- # killprocess 61859 00:08:22.362 18:06:39 -- common/autotest_common.sh@936 -- # '[' -z 61859 ']' 00:08:22.362 18:06:39 -- common/autotest_common.sh@940 -- # kill -0 61859 00:08:22.362 18:06:39 -- common/autotest_common.sh@941 -- # uname 00:08:22.362 18:06:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:22.362 18:06:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61859 00:08:22.362 killing process with pid 61859 00:08:22.362 18:06:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:22.362 18:06:39 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:22.362 18:06:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61859' 00:08:22.362 18:06:39 -- common/autotest_common.sh@955 -- # kill 61859 00:08:22.362 18:06:39 -- common/autotest_common.sh@960 -- # wait 61859 00:08:22.362 18:06:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:22.362 18:06:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:22.362 18:06:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:22.362 18:06:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.362 18:06:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:22.362 18:06:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.362 18:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.362 18:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.362 18:06:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:22.362 00:08:22.362 real 0m13.553s 00:08:22.362 user 0m23.576s 00:08:22.362 sys 0m1.942s 00:08:22.362 18:06:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.362 ************************************ 00:08:22.362 END TEST nvmf_queue_depth 00:08:22.362 ************************************ 00:08:22.362 18:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.362 18:06:39 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.362 18:06:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.362 18:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.362 ************************************ 00:08:22.362 START TEST nvmf_multipath 00:08:22.362 ************************************ 00:08:22.362 18:06:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:22.362 * Looking for test storage... 00:08:22.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:22.362 18:06:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.362 18:06:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.362 18:06:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.362 18:06:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.362 18:06:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.362 18:06:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.362 18:06:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.362 18:06:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.362 18:06:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.362 18:06:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.362 18:06:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.362 18:06:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.362 18:06:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.362 18:06:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.362 18:06:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.362 18:06:39 -- scripts/common.sh@344 -- # : 1 00:08:22.362 18:06:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.362 18:06:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.362 18:06:39 -- scripts/common.sh@364 -- # decimal 1 00:08:22.362 18:06:39 -- scripts/common.sh@352 -- # local d=1 00:08:22.362 18:06:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.362 18:06:39 -- scripts/common.sh@354 -- # echo 1 00:08:22.362 18:06:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.362 18:06:39 -- scripts/common.sh@365 -- # decimal 2 00:08:22.362 18:06:39 -- scripts/common.sh@352 -- # local d=2 00:08:22.362 18:06:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.362 18:06:39 -- scripts/common.sh@354 -- # echo 2 00:08:22.362 18:06:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.362 18:06:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.362 18:06:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.362 18:06:39 -- scripts/common.sh@367 -- # return 0 00:08:22.362 18:06:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.362 --rc genhtml_branch_coverage=1 00:08:22.362 --rc genhtml_function_coverage=1 00:08:22.362 --rc genhtml_legend=1 00:08:22.362 --rc geninfo_all_blocks=1 00:08:22.362 --rc geninfo_unexecuted_blocks=1 00:08:22.362 00:08:22.362 ' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.362 --rc genhtml_branch_coverage=1 00:08:22.362 --rc genhtml_function_coverage=1 00:08:22.362 --rc genhtml_legend=1 00:08:22.362 --rc geninfo_all_blocks=1 00:08:22.362 --rc geninfo_unexecuted_blocks=1 00:08:22.362 00:08:22.362 ' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.362 --rc genhtml_branch_coverage=1 00:08:22.362 --rc genhtml_function_coverage=1 00:08:22.362 --rc genhtml_legend=1 00:08:22.362 --rc geninfo_all_blocks=1 00:08:22.362 --rc geninfo_unexecuted_blocks=1 00:08:22.362 00:08:22.362 ' 00:08:22.362 18:06:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.362 --rc genhtml_branch_coverage=1 00:08:22.362 --rc genhtml_function_coverage=1 00:08:22.362 --rc genhtml_legend=1 00:08:22.362 --rc geninfo_all_blocks=1 00:08:22.362 --rc geninfo_unexecuted_blocks=1 00:08:22.362 00:08:22.362 ' 00:08:22.362 18:06:39 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:22.362 18:06:39 -- nvmf/common.sh@7 -- # uname -s 00:08:22.362 18:06:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.362 18:06:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.362 18:06:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.362 18:06:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.362 18:06:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.362 18:06:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.362 18:06:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.362 18:06:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.362 18:06:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.363 18:06:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.363 18:06:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:22.363 
18:06:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:22.363 18:06:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.363 18:06:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.363 18:06:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:22.363 18:06:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.363 18:06:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.363 18:06:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.363 18:06:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.363 18:06:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.363 18:06:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.363 18:06:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.363 18:06:39 -- paths/export.sh@5 -- # export PATH 00:08:22.363 18:06:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.363 18:06:39 -- nvmf/common.sh@46 -- # : 0 00:08:22.363 18:06:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.363 18:06:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.363 18:06:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.363 18:06:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.363 18:06:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.363 18:06:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:22.363 18:06:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.363 18:06:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.363 18:06:39 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.363 18:06:39 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.363 18:06:39 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:22.363 18:06:39 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.363 18:06:39 -- target/multipath.sh@43 -- # nvmftestinit 00:08:22.363 18:06:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:22.363 18:06:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.363 18:06:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:22.363 18:06:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:22.363 18:06:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:22.363 18:06:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.363 18:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.363 18:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.363 18:06:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:22.363 18:06:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.363 18:06:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.363 18:06:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:22.363 18:06:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:22.363 18:06:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:22.363 18:06:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:22.363 18:06:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:22.363 18:06:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.363 18:06:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:22.363 18:06:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:22.363 18:06:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:22.363 18:06:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:22.363 18:06:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:22.363 18:06:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:22.363 Cannot find device "nvmf_tgt_br" 00:08:22.363 18:06:39 -- nvmf/common.sh@154 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:22.363 Cannot find device "nvmf_tgt_br2" 00:08:22.363 18:06:39 -- nvmf/common.sh@155 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:22.363 18:06:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:22.363 Cannot find device "nvmf_tgt_br" 00:08:22.363 18:06:39 -- nvmf/common.sh@157 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:22.363 Cannot find device "nvmf_tgt_br2" 00:08:22.363 18:06:39 -- nvmf/common.sh@158 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:22.363 18:06:39 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:22.363 18:06:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:22.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.363 18:06:39 -- nvmf/common.sh@161 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:22.363 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:22.363 18:06:39 -- nvmf/common.sh@162 -- # true 00:08:22.363 18:06:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:22.363 18:06:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:22.363 18:06:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:22.363 18:06:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:22.363 18:06:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:22.363 18:06:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:22.363 18:06:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:22.363 18:06:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:22.363 18:06:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:22.363 18:06:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:22.363 18:06:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:22.363 18:06:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:22.363 18:06:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:22.363 18:06:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:22.363 18:06:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:22.363 18:06:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:22.363 18:06:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:22.363 18:06:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:22.363 18:06:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:22.363 18:06:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:22.363 18:06:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:22.363 18:06:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:22.363 18:06:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:22.363 18:06:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:22.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:22.363 00:08:22.363 --- 10.0.0.2 ping statistics --- 00:08:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.363 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:22.363 18:06:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:22.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:22.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:22.363 00:08:22.363 --- 10.0.0.3 ping statistics --- 00:08:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.363 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:22.363 18:06:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:22.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:22.363 00:08:22.363 --- 10.0.0.1 ping statistics --- 00:08:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.363 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:22.363 18:06:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.363 18:06:39 -- nvmf/common.sh@421 -- # return 0 00:08:22.363 18:06:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:22.363 18:06:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.363 18:06:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:22.363 18:06:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.363 18:06:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:22.363 18:06:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:22.363 18:06:39 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:22.363 18:06:39 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:22.363 18:06:39 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:22.364 18:06:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:22.364 18:06:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.364 18:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 18:06:39 -- nvmf/common.sh@469 -- # nvmfpid=62223 00:08:22.364 18:06:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.364 18:06:39 -- nvmf/common.sh@470 -- # waitforlisten 62223 00:08:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.364 18:06:39 -- common/autotest_common.sh@829 -- # '[' -z 62223 ']' 00:08:22.364 18:06:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.364 18:06:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.364 18:06:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.364 18:06:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.364 18:06:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 [2024-11-18 18:06:40.014368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.364 [2024-11-18 18:06:40.014474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.364 [2024-11-18 18:06:40.149180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.364 [2024-11-18 18:06:40.203839] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.364 [2024-11-18 18:06:40.204231] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:22.364 [2024-11-18 18:06:40.204287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.364 [2024-11-18 18:06:40.204418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.364 [2024-11-18 18:06:40.204590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.364 [2024-11-18 18:06:40.204883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.364 [2024-11-18 18:06:40.204879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.364 [2024-11-18 18:06:40.204714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.623 18:06:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.623 18:06:40 -- common/autotest_common.sh@862 -- # return 0 00:08:22.623 18:06:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:22.623 18:06:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.623 18:06:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.623 18:06:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.623 18:06:41 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:22.882 [2024-11-18 18:06:41.307617] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.882 18:06:41 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:23.142 Malloc0 00:08:23.142 18:06:41 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:23.401 18:06:41 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.660 18:06:42 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.919 [2024-11-18 18:06:42.347168] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.919 18:06:42 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:24.178 [2024-11-18 18:06:42.591487] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:24.178 18:06:42 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:24.178 18:06:42 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:24.438 18:06:42 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.438 18:06:42 -- common/autotest_common.sh@1187 -- # local i=0 00:08:24.438 18:06:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.438 18:06:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:24.438 18:06:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:26.343 18:06:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
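The two nvme connect calls above give the kernel two paths to the same namespace, one through the 10.0.0.2 listener and one through 10.0.0.3. Native NVMe multipath groups them under a single subsystem as nvme0c0n1/nvme0c1n1, and the per-path ANA state that the checks below poll is read from sysfs while the target flips it with nvmf_subsystem_listener_set_ana_state. A minimal sketch of that sequence, reusing the hostnqn/hostid generated for this run:
# connect the same subsystem over both listeners (multipath.sh@67-68)
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
# read the current ANA state of each path (multipath.sh@23)
cat /sys/block/nvme0c0n1/ana_state
cat /sys/block/nvme0c1n1/ana_state
# make the first path inaccessible from the target side, as the test does later (multipath.sh@92)
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible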
00:08:26.343 18:06:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:26.343 18:06:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.343 18:06:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:26.343 18:06:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.343 18:06:44 -- common/autotest_common.sh@1197 -- # return 0 00:08:26.343 18:06:44 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:26.343 18:06:44 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:26.343 18:06:44 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:26.343 18:06:44 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.343 18:06:44 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:26.343 18:06:44 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:26.343 18:06:44 -- target/multipath.sh@38 -- # return 0 00:08:26.343 18:06:44 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:26.343 18:06:44 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:26.343 18:06:44 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:26.343 18:06:44 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:26.343 18:06:44 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:26.343 18:06:44 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:26.343 18:06:44 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:26.343 18:06:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:26.343 18:06:44 -- target/multipath.sh@22 -- # local timeout=20 00:08:26.343 18:06:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:26.343 18:06:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:26.343 18:06:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.343 18:06:44 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:26.343 18:06:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:26.343 18:06:44 -- target/multipath.sh@22 -- # local timeout=20 00:08:26.343 18:06:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:26.343 18:06:44 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:26.343 18:06:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:26.343 18:06:44 -- target/multipath.sh@85 -- # echo numa 00:08:26.343 18:06:44 -- target/multipath.sh@88 -- # fio_pid=62318 00:08:26.343 18:06:44 -- target/multipath.sh@90 -- # sleep 1 00:08:26.343 18:06:44 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:26.343 [global] 00:08:26.343 thread=1 00:08:26.343 invalidate=1 00:08:26.343 rw=randrw 00:08:26.343 time_based=1 00:08:26.343 runtime=6 00:08:26.343 ioengine=libaio 00:08:26.343 direct=1 00:08:26.343 bs=4096 00:08:26.343 iodepth=128 00:08:26.343 norandommap=0 00:08:26.343 numjobs=1 00:08:26.343 00:08:26.343 verify_dump=1 00:08:26.343 verify_backlog=512 00:08:26.343 verify_state_save=0 00:08:26.343 do_verify=1 00:08:26.343 verify=crc32c-intel 00:08:26.343 [job0] 00:08:26.343 filename=/dev/nvme0n1 00:08:26.602 Could not set queue depth (nvme0n1) 00:08:26.602 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:26.602 fio-3.35 00:08:26.602 Starting 1 thread 00:08:27.557 18:06:45 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:27.823 18:06:46 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:28.082 18:06:46 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:28.082 18:06:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:28.082 18:06:46 -- target/multipath.sh@22 -- # local timeout=20 00:08:28.082 18:06:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.082 18:06:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.082 18:06:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.082 18:06:46 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:28.082 18:06:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:28.082 18:06:46 -- target/multipath.sh@22 -- # local timeout=20 00:08:28.082 18:06:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.082 18:06:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.082 18:06:46 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.082 18:06:46 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:28.341 18:06:46 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:28.600 18:06:46 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:28.600 18:06:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:28.600 18:06:46 -- target/multipath.sh@22 -- # local timeout=20 00:08:28.600 18:06:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:28.600 18:06:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:28.600 18:06:46 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:28.600 18:06:46 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:28.600 18:06:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:28.600 18:06:46 -- target/multipath.sh@22 -- # local timeout=20 00:08:28.600 18:06:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:28.600 18:06:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:28.600 18:06:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:28.600 18:06:46 -- target/multipath.sh@104 -- # wait 62318 00:08:32.788 00:08:32.788 job0: (groupid=0, jobs=1): err= 0: pid=62339: Mon Nov 18 18:06:51 2024 00:08:32.788 read: IOPS=10.7k, BW=42.0MiB/s (44.0MB/s)(252MiB/6005msec) 00:08:32.789 slat (usec): min=7, max=7790, avg=54.48, stdev=234.64 00:08:32.789 clat (usec): min=1155, max=17216, avg=8035.36, stdev=1451.50 00:08:32.789 lat (usec): min=1165, max=17224, avg=8089.84, stdev=1456.60 00:08:32.789 clat percentiles (usec): 00:08:32.789 | 1.00th=[ 4293], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:08:32.789 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:08:32.789 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[11207], 00:08:32.789 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13435], 99.95th=[13698], 00:08:32.789 | 99.99th=[14222] 00:08:32.789 bw ( KiB/s): min= 7400, max=28248, per=52.75%, avg=22668.91, stdev=6097.51, samples=11 00:08:32.789 iops : min= 1850, max= 7062, avg=5667.18, stdev=1524.36, samples=11 00:08:32.789 write: IOPS=6336, BW=24.8MiB/s (26.0MB/s)(134MiB/5411msec); 0 zone resets 00:08:32.789 slat (usec): min=13, max=2254, avg=63.31, stdev=159.07 00:08:32.789 clat (usec): min=1668, max=17025, avg=7100.26, stdev=1301.66 00:08:32.789 lat (usec): min=1692, max=17076, avg=7163.57, stdev=1307.43 00:08:32.789 clat percentiles (usec): 00:08:32.789 | 1.00th=[ 3195], 5.00th=[ 4146], 10.00th=[ 5538], 20.00th=[ 6587], 00:08:32.789 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7439], 00:08:32.789 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 8717], 00:08:32.789 | 99.00th=[10945], 99.50th=[11469], 99.90th=[12387], 99.95th=[12649], 00:08:32.789 | 99.99th=[13435] 00:08:32.789 bw ( KiB/s): min= 7800, max=27856, per=89.55%, avg=22697.73, stdev=5911.52, samples=11 00:08:32.789 iops : min= 1950, max= 6964, avg=5674.36, stdev=1477.84, samples=11 00:08:32.789 lat (msec) : 2=0.02%, 4=1.89%, 10=91.92%, 20=6.17% 00:08:32.789 cpu : usr=5.58%, sys=20.72%, ctx=5594, majf=0, minf=90 00:08:32.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:32.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.789 issued rwts: total=64514,34288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.789 00:08:32.789 Run status group 0 (all jobs): 00:08:32.789 READ: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=252MiB (264MB), run=6005-6005msec 00:08:32.789 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=134MiB (140MB), run=5411-5411msec 00:08:32.789 00:08:32.789 Disk stats (read/write): 00:08:32.789 nvme0n1: ios=63601/33665, merge=0/0, 
ticks=488881/224271, in_queue=713152, util=98.60% 00:08:32.789 18:06:51 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:33.048 18:06:51 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:33.307 18:06:51 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:33.307 18:06:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:33.307 18:06:51 -- target/multipath.sh@22 -- # local timeout=20 00:08:33.307 18:06:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.307 18:06:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.307 18:06:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.307 18:06:51 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:33.307 18:06:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:33.307 18:06:51 -- target/multipath.sh@22 -- # local timeout=20 00:08:33.307 18:06:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.307 18:06:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.307 18:06:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.307 18:06:51 -- target/multipath.sh@113 -- # echo round-robin 00:08:33.307 18:06:51 -- target/multipath.sh@116 -- # fio_pid=62414 00:08:33.307 18:06:51 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:33.307 18:06:51 -- target/multipath.sh@118 -- # sleep 1 00:08:33.307 [global] 00:08:33.307 thread=1 00:08:33.307 invalidate=1 00:08:33.307 rw=randrw 00:08:33.307 time_based=1 00:08:33.307 runtime=6 00:08:33.307 ioengine=libaio 00:08:33.307 direct=1 00:08:33.307 bs=4096 00:08:33.307 iodepth=128 00:08:33.307 norandommap=0 00:08:33.307 numjobs=1 00:08:33.307 00:08:33.307 verify_dump=1 00:08:33.307 verify_backlog=512 00:08:33.307 verify_state_save=0 00:08:33.307 do_verify=1 00:08:33.307 verify=crc32c-intel 00:08:33.307 [job0] 00:08:33.307 filename=/dev/nvme0n1 00:08:33.307 Could not set queue depth (nvme0n1) 00:08:33.565 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:33.565 fio-3.35 00:08:33.565 Starting 1 thread 00:08:34.502 18:06:52 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:34.502 18:06:53 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:35.069 18:06:53 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:35.069 18:06:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:35.069 18:06:53 -- target/multipath.sh@22 -- # local timeout=20 00:08:35.069 18:06:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.069 18:06:53 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.069 18:06:53 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.069 18:06:53 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:35.069 18:06:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:35.069 18:06:53 -- target/multipath.sh@22 -- # local timeout=20 00:08:35.069 18:06:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.069 18:06:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.069 18:06:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.069 18:06:53 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:35.069 18:06:53 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:35.328 18:06:53 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:35.328 18:06:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:35.328 18:06:53 -- target/multipath.sh@22 -- # local timeout=20 00:08:35.328 18:06:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.328 18:06:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.328 18:06:53 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.328 18:06:53 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:35.328 18:06:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:35.328 18:06:53 -- target/multipath.sh@22 -- # local timeout=20 00:08:35.328 18:06:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.328 18:06:53 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.328 18:06:53 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.328 18:06:53 -- target/multipath.sh@132 -- # wait 62414 00:08:39.519 00:08:39.519 job0: (groupid=0, jobs=1): err= 0: pid=62441: Mon Nov 18 18:06:58 2024 00:08:39.519 read: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(276MiB/6006msec) 00:08:39.519 slat (usec): min=5, max=7188, avg=41.16, stdev=193.93 00:08:39.519 clat (usec): min=311, max=18300, avg=7342.77, stdev=2062.37 00:08:39.519 lat (usec): min=327, max=18308, avg=7383.92, stdev=2074.48 00:08:39.519 clat percentiles (usec): 00:08:39.519 | 1.00th=[ 1614], 5.00th=[ 3687], 10.00th=[ 4490], 20.00th=[ 5866], 00:08:39.519 | 30.00th=[ 6849], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:08:39.519 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[11076], 00:08:39.519 | 99.00th=[12518], 99.50th=[13304], 99.90th=[16057], 99.95th=[16581], 00:08:39.519 | 99.99th=[17171] 00:08:39.519 bw ( KiB/s): min=14048, max=37696, per=53.30%, avg=25071.27, stdev=7778.12, samples=11 00:08:39.519 iops : min= 3512, max= 9424, avg=6267.82, stdev=1944.53, samples=11 00:08:39.519 write: IOPS=6977, BW=27.3MiB/s (28.6MB/s)(147MiB/5386msec); 0 zone resets 00:08:39.519 slat (usec): min=14, max=2926, avg=54.39, stdev=137.36 00:08:39.519 clat (usec): min=246, max=16066, avg=6495.06, stdev=1923.95 00:08:39.519 lat (usec): min=276, max=16088, avg=6549.45, stdev=1935.61 00:08:39.519 clat percentiles (usec): 00:08:39.519 | 1.00th=[ 1287], 5.00th=[ 2999], 10.00th=[ 3621], 20.00th=[ 4686], 00:08:39.519 | 30.00th=[ 6128], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7308], 00:08:39.519 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 8586], 00:08:39.519 | 99.00th=[11207], 99.50th=[12125], 99.90th=[15270], 99.95th=[15664], 00:08:39.519 | 99.99th=[16057] 00:08:39.519 bw ( KiB/s): min=14848, max=36936, per=89.87%, avg=25084.18, stdev=7502.77, samples=11 00:08:39.519 iops : min= 3712, max= 9234, avg=6271.00, stdev=1875.61, samples=11 00:08:39.519 lat (usec) : 250=0.01%, 500=0.03%, 750=0.12%, 1000=0.18% 00:08:39.519 lat (msec) : 2=1.47%, 4=7.53%, 10=85.14%, 20=5.53% 00:08:39.519 cpu : usr=5.76%, sys=22.30%, ctx=6172, majf=0, minf=108 00:08:39.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:08:39.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.519 issued rwts: total=70627,37583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.519 00:08:39.519 Run status group 0 (all jobs): 00:08:39.519 READ: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=276MiB (289MB), run=6006-6006msec 00:08:39.519 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=147MiB (154MB), run=5386-5386msec 00:08:39.519 00:08:39.519 Disk stats (read/write): 00:08:39.519 nvme0n1: ios=69981/36690, merge=0/0, ticks=491446/222958, in_queue=714404, util=98.60% 00:08:39.778 18:06:58 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:39.778 18:06:58 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.778 18:06:58 -- common/autotest_common.sh@1208 -- # local i=0 00:08:39.778 18:06:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:39.778 
18:06:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.778 18:06:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:39.778 18:06:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.778 18:06:58 -- common/autotest_common.sh@1220 -- # return 0 00:08:39.778 18:06:58 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.036 18:06:58 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:40.036 18:06:58 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:40.036 18:06:58 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:40.036 18:06:58 -- target/multipath.sh@144 -- # nvmftestfini 00:08:40.036 18:06:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.036 18:06:58 -- nvmf/common.sh@116 -- # sync 00:08:40.036 18:06:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:40.036 18:06:58 -- nvmf/common.sh@119 -- # set +e 00:08:40.036 18:06:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.036 18:06:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:40.036 rmmod nvme_tcp 00:08:40.036 rmmod nvme_fabrics 00:08:40.036 rmmod nvme_keyring 00:08:40.036 18:06:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.036 18:06:58 -- nvmf/common.sh@123 -- # set -e 00:08:40.036 18:06:58 -- nvmf/common.sh@124 -- # return 0 00:08:40.036 18:06:58 -- nvmf/common.sh@477 -- # '[' -n 62223 ']' 00:08:40.037 18:06:58 -- nvmf/common.sh@478 -- # killprocess 62223 00:08:40.037 18:06:58 -- common/autotest_common.sh@936 -- # '[' -z 62223 ']' 00:08:40.037 18:06:58 -- common/autotest_common.sh@940 -- # kill -0 62223 00:08:40.037 18:06:58 -- common/autotest_common.sh@941 -- # uname 00:08:40.037 18:06:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.037 18:06:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62223 00:08:40.037 killing process with pid 62223 00:08:40.037 18:06:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.037 18:06:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.037 18:06:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62223' 00:08:40.037 18:06:58 -- common/autotest_common.sh@955 -- # kill 62223 00:08:40.037 18:06:58 -- common/autotest_common.sh@960 -- # wait 62223 00:08:40.295 18:06:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.295 18:06:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:40.295 18:06:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:40.295 18:06:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.295 18:06:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:40.295 18:06:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.295 18:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.295 18:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.295 18:06:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:40.295 00:08:40.295 real 0m19.462s 00:08:40.295 user 1m12.714s 00:08:40.295 sys 0m9.884s 00:08:40.295 18:06:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.295 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:40.295 ************************************ 00:08:40.295 END TEST nvmf_multipath 00:08:40.295 ************************************ 00:08:40.555 18:06:58 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.555 18:06:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.555 18:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.555 18:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:40.555 ************************************ 00:08:40.555 START TEST nvmf_zcopy 00:08:40.555 ************************************ 00:08:40.555 18:06:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.555 * Looking for test storage... 00:08:40.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.555 18:06:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:40.555 18:06:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:40.555 18:06:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:40.555 18:06:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:40.555 18:06:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:40.555 18:06:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:40.555 18:06:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:40.555 18:06:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:40.555 18:06:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:40.555 18:06:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.555 18:06:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:40.555 18:06:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:40.555 18:06:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:40.555 18:06:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:40.555 18:06:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:40.555 18:06:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:40.555 18:06:59 -- scripts/common.sh@344 -- # : 1 00:08:40.555 18:06:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:40.555 18:06:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.555 18:06:59 -- scripts/common.sh@364 -- # decimal 1 00:08:40.555 18:06:59 -- scripts/common.sh@352 -- # local d=1 00:08:40.555 18:06:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.555 18:06:59 -- scripts/common.sh@354 -- # echo 1 00:08:40.555 18:06:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:40.555 18:06:59 -- scripts/common.sh@365 -- # decimal 2 00:08:40.555 18:06:59 -- scripts/common.sh@352 -- # local d=2 00:08:40.555 18:06:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.555 18:06:59 -- scripts/common.sh@354 -- # echo 2 00:08:40.555 18:06:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:40.555 18:06:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:40.555 18:06:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:40.555 18:06:59 -- scripts/common.sh@367 -- # return 0 00:08:40.555 18:06:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.555 18:06:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:40.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.555 --rc genhtml_branch_coverage=1 00:08:40.555 --rc genhtml_function_coverage=1 00:08:40.555 --rc genhtml_legend=1 00:08:40.555 --rc geninfo_all_blocks=1 00:08:40.555 --rc geninfo_unexecuted_blocks=1 00:08:40.555 00:08:40.555 ' 00:08:40.555 18:06:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:40.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.555 --rc genhtml_branch_coverage=1 00:08:40.555 --rc genhtml_function_coverage=1 00:08:40.555 --rc genhtml_legend=1 00:08:40.555 --rc geninfo_all_blocks=1 00:08:40.555 --rc geninfo_unexecuted_blocks=1 00:08:40.555 00:08:40.555 ' 00:08:40.555 18:06:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:40.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.555 --rc genhtml_branch_coverage=1 00:08:40.555 --rc genhtml_function_coverage=1 00:08:40.555 --rc genhtml_legend=1 00:08:40.555 --rc geninfo_all_blocks=1 00:08:40.555 --rc geninfo_unexecuted_blocks=1 00:08:40.555 00:08:40.555 ' 00:08:40.555 18:06:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:40.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.555 --rc genhtml_branch_coverage=1 00:08:40.555 --rc genhtml_function_coverage=1 00:08:40.555 --rc genhtml_legend=1 00:08:40.555 --rc geninfo_all_blocks=1 00:08:40.555 --rc geninfo_unexecuted_blocks=1 00:08:40.555 00:08:40.555 ' 00:08:40.555 18:06:59 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.555 18:06:59 -- nvmf/common.sh@7 -- # uname -s 00:08:40.555 18:06:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.555 18:06:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.555 18:06:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.555 18:06:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.555 18:06:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.555 18:06:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.555 18:06:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.555 18:06:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.555 18:06:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.555 18:06:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.555 18:06:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:40.556 
18:06:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:08:40.556 18:06:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.556 18:06:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.556 18:06:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:40.556 18:06:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.556 18:06:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.556 18:06:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.556 18:06:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.556 18:06:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.556 18:06:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.556 18:06:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.556 18:06:59 -- paths/export.sh@5 -- # export PATH 00:08:40.556 18:06:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.556 18:06:59 -- nvmf/common.sh@46 -- # : 0 00:08:40.556 18:06:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:40.556 18:06:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:40.556 18:06:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:40.556 18:06:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.556 18:06:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.556 18:06:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:08:40.556 18:06:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:40.556 18:06:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:40.556 18:06:59 -- target/zcopy.sh@12 -- # nvmftestinit 00:08:40.556 18:06:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:40.556 18:06:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.556 18:06:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:40.556 18:06:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:40.556 18:06:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:40.556 18:06:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.556 18:06:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.556 18:06:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.556 18:06:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:40.556 18:06:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:40.556 18:06:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:40.556 18:06:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:40.556 18:06:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:40.556 18:06:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:40.556 18:06:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.556 18:06:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.556 18:06:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:40.556 18:06:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:40.556 18:06:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:40.556 18:06:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:40.556 18:06:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:40.556 18:06:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.556 18:06:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:40.556 18:06:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:40.556 18:06:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:40.556 18:06:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:40.556 18:06:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:40.556 18:06:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:40.556 Cannot find device "nvmf_tgt_br" 00:08:40.556 18:06:59 -- nvmf/common.sh@154 -- # true 00:08:40.556 18:06:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.556 Cannot find device "nvmf_tgt_br2" 00:08:40.556 18:06:59 -- nvmf/common.sh@155 -- # true 00:08:40.556 18:06:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:40.556 18:06:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:40.815 Cannot find device "nvmf_tgt_br" 00:08:40.815 18:06:59 -- nvmf/common.sh@157 -- # true 00:08:40.815 18:06:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:40.815 Cannot find device "nvmf_tgt_br2" 00:08:40.815 18:06:59 -- nvmf/common.sh@158 -- # true 00:08:40.815 18:06:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:40.815 18:06:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:40.815 18:06:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.815 18:06:59 -- nvmf/common.sh@161 -- # true 00:08:40.815 18:06:59 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.815 18:06:59 -- nvmf/common.sh@162 -- # true 00:08:40.815 18:06:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:40.815 18:06:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:40.815 18:06:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:40.815 18:06:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:40.815 18:06:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:40.815 18:06:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:40.815 18:06:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:40.815 18:06:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:40.815 18:06:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:40.815 18:06:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:40.815 18:06:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:40.815 18:06:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:40.815 18:06:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:40.815 18:06:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:40.815 18:06:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:40.815 18:06:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:40.815 18:06:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:40.815 18:06:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:40.815 18:06:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:40.815 18:06:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:40.815 18:06:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:40.815 18:06:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:40.815 18:06:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:40.815 18:06:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:40.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:40.815 00:08:40.815 --- 10.0.0.2 ping statistics --- 00:08:40.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.815 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:40.815 18:06:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:40.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:40.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:40.815 00:08:40.815 --- 10.0.0.3 ping statistics --- 00:08:40.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.815 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:40.815 18:06:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:40.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:40.815 00:08:40.815 --- 10.0.0.1 ping statistics --- 00:08:40.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.815 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:40.815 18:06:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.815 18:06:59 -- nvmf/common.sh@421 -- # return 0 00:08:40.815 18:06:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:40.815 18:06:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.815 18:06:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:40.815 18:06:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:40.816 18:06:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.816 18:06:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:40.816 18:06:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:41.074 18:06:59 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:41.074 18:06:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:41.074 18:06:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.074 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 18:06:59 -- nvmf/common.sh@469 -- # nvmfpid=62695 00:08:41.074 18:06:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:41.074 18:06:59 -- nvmf/common.sh@470 -- # waitforlisten 62695 00:08:41.074 18:06:59 -- common/autotest_common.sh@829 -- # '[' -z 62695 ']' 00:08:41.074 18:06:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.074 18:06:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.074 18:06:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.074 18:06:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.074 18:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:41.074 [2024-11-18 18:06:59.491818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:41.074 [2024-11-18 18:06:59.492115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.074 [2024-11-18 18:06:59.626561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.333 [2024-11-18 18:06:59.703072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.333 [2024-11-18 18:06:59.703199] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.333 [2024-11-18 18:06:59.703210] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.333 [2024-11-18 18:06:59.703218] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
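For readability, below is a compact shell sketch of the test network that the nvmf_veth_init steps above construct: one target namespace, three veth pairs, a bridge joining the host-side peers, the 10.0.0.0/24 addressing, firewall rules for NVMe/TCP, and the connectivity pings. Every interface name, namespace name, address, and flag is copied from the commands visible in the log; this is an illustrative reconstruction under those assumptions, not the contents of nvmf/common.sh itself.

    # Target namespace plus three veth pairs: one initiator-side, two target-side.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up on both sides of the namespace boundary.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth peers so initiator and target traffic can cross.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Accept NVMe/TCP (port 4420) on the initiator interface, allow bridge forwarding, verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1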
00:08:41.333 [2024-11-18 18:06:59.703247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.900 18:07:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.900 18:07:00 -- common/autotest_common.sh@862 -- # return 0 00:08:41.900 18:07:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:41.900 18:07:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.900 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 18:07:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.161 18:07:00 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:42.161 18:07:00 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 [2024-11-18 18:07:00.509666] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 [2024-11-18 18:07:00.525802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 malloc0 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:42.161 18:07:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.161 18:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:42.161 18:07:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.161 18:07:00 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:42.161 18:07:00 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:42.161 18:07:00 -- nvmf/common.sh@520 -- # config=() 00:08:42.161 18:07:00 -- nvmf/common.sh@520 -- # local subsystem config 00:08:42.161 18:07:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:42.161 18:07:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:42.161 { 00:08:42.161 "params": { 00:08:42.161 "name": "Nvme$subsystem", 00:08:42.161 "trtype": "$TEST_TRANSPORT", 
00:08:42.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.161 "adrfam": "ipv4", 00:08:42.161 "trsvcid": "$NVMF_PORT", 00:08:42.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.161 "hdgst": ${hdgst:-false}, 00:08:42.161 "ddgst": ${ddgst:-false} 00:08:42.161 }, 00:08:42.161 "method": "bdev_nvme_attach_controller" 00:08:42.161 } 00:08:42.161 EOF 00:08:42.161 )") 00:08:42.161 18:07:00 -- nvmf/common.sh@542 -- # cat 00:08:42.161 18:07:00 -- nvmf/common.sh@544 -- # jq . 00:08:42.161 18:07:00 -- nvmf/common.sh@545 -- # IFS=, 00:08:42.161 18:07:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:42.161 "params": { 00:08:42.161 "name": "Nvme1", 00:08:42.161 "trtype": "tcp", 00:08:42.161 "traddr": "10.0.0.2", 00:08:42.161 "adrfam": "ipv4", 00:08:42.161 "trsvcid": "4420", 00:08:42.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.161 "hdgst": false, 00:08:42.161 "ddgst": false 00:08:42.161 }, 00:08:42.161 "method": "bdev_nvme_attach_controller" 00:08:42.161 }' 00:08:42.161 [2024-11-18 18:07:00.616217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:42.161 [2024-11-18 18:07:00.616303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62728 ] 00:08:42.161 [2024-11-18 18:07:00.757730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.420 [2024-11-18 18:07:00.825969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.420 Running I/O for 10 seconds... 00:08:52.467 00:08:52.467 Latency(us) 00:08:52.467 [2024-11-18T18:07:11.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.467 [2024-11-18T18:07:11.071Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:52.467 Verification LBA range: start 0x0 length 0x1000 00:08:52.467 Nvme1n1 : 10.01 10169.27 79.45 0.00 0.00 12555.10 1288.38 18945.86 00:08:52.467 [2024-11-18T18:07:11.071Z] =================================================================================================================== 00:08:52.467 [2024-11-18T18:07:11.071Z] Total : 10169.27 79.45 0.00 0.00 12555.10 1288.38 18945.86 00:08:52.726 18:07:11 -- target/zcopy.sh@39 -- # perfpid=62845 00:08:52.726 18:07:11 -- target/zcopy.sh@41 -- # xtrace_disable 00:08:52.726 18:07:11 -- common/autotest_common.sh@10 -- # set +x 00:08:52.726 18:07:11 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:52.726 18:07:11 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:52.726 18:07:11 -- nvmf/common.sh@520 -- # config=() 00:08:52.726 18:07:11 -- nvmf/common.sh@520 -- # local subsystem config 00:08:52.726 18:07:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:52.726 18:07:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:52.726 { 00:08:52.726 "params": { 00:08:52.726 "name": "Nvme$subsystem", 00:08:52.726 "trtype": "$TEST_TRANSPORT", 00:08:52.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.726 "adrfam": "ipv4", 00:08:52.726 "trsvcid": "$NVMF_PORT", 00:08:52.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.726 "hdgst": ${hdgst:-false}, 00:08:52.726 "ddgst": ${ddgst:-false} 
00:08:52.726 }, 00:08:52.726 "method": "bdev_nvme_attach_controller" 00:08:52.726 } 00:08:52.726 EOF 00:08:52.726 )") 00:08:52.726 18:07:11 -- nvmf/common.sh@542 -- # cat 00:08:52.726 [2024-11-18 18:07:11.165860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.166096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 18:07:11 -- nvmf/common.sh@544 -- # jq . 00:08:52.726 18:07:11 -- nvmf/common.sh@545 -- # IFS=, 00:08:52.726 18:07:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:52.726 "params": { 00:08:52.726 "name": "Nvme1", 00:08:52.726 "trtype": "tcp", 00:08:52.726 "traddr": "10.0.0.2", 00:08:52.726 "adrfam": "ipv4", 00:08:52.726 "trsvcid": "4420", 00:08:52.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.726 "hdgst": false, 00:08:52.726 "ddgst": false 00:08:52.726 }, 00:08:52.726 "method": "bdev_nvme_attach_controller" 00:08:52.726 }' 00:08:52.726 [2024-11-18 18:07:11.173826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.173859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.181824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.181854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.189826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.189855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.201826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.201856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.207007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:52.726 [2024-11-18 18:07:11.207262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62845 ] 00:08:52.726 [2024-11-18 18:07:11.213829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.213986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.225840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.225984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.237835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.238000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.249835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.249978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.726 [2024-11-18 18:07:11.261844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.726 [2024-11-18 18:07:11.262004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.727 [2024-11-18 18:07:11.273845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.727 [2024-11-18 18:07:11.274002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.727 [2024-11-18 18:07:11.285849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.727 [2024-11-18 18:07:11.285993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.727 [2024-11-18 18:07:11.297859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.727 [2024-11-18 18:07:11.297993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.727 [2024-11-18 18:07:11.309866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.727 [2024-11-18 18:07:11.310016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.727 [2024-11-18 18:07:11.321870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.727 [2024-11-18 18:07:11.322018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.333875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.334012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.341875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.342025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.345010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.986 [2024-11-18 18:07:11.353916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.354241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:52.986 [2024-11-18 18:07:11.365896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.366074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.377915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.378221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.389902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.390078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.400192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.986 [2024-11-18 18:07:11.401904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.402073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.413918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.414159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.425951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.426257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.437937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.438274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.449952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.450302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.462073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.462267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.473969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.474192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.486003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.486227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.498004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.498238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.510004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.510193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.522035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.522242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 Running I/O for 5 seconds... 
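As a cross-reference, the target-side provisioning that target/zcopy.sh issues above through rpc_cmd corresponds roughly to the scripts/rpc.py calls sketched below (rpc_cmd is the autotest wrapper that forwards its arguments to rpc.py against /var/tmp/spdk.sock). The arguments are copied verbatim from the log; treating them as valid rpc.py flag spellings for this SPDK revision is an assumption, so this is a sketch rather than a canonical command listing.

    # Zero-copy-enabled TCP transport, then a subsystem with one malloc-backed namespace (arguments as logged above).
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages that follows appears to come from the test repeatedly re-issuing nvmf_subsystem_add_ns for an NSID that already exists while the second bdevperf run (-w randrw -M 50, 5 seconds) is in flight; the messages repeat on a steady cadence for the duration of that run rather than marking a one-off setup failure.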
00:08:52.986 [2024-11-18 18:07:11.538096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.538292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.554744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.554927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.986 [2024-11-18 18:07:11.570443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.986 [2024-11-18 18:07:11.570652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.589102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.589289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.603548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.603733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.618504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.618707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.629886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.629924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.646420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.646454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.662513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.662572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.680805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.680839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.695864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.695899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.707405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.707439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.723440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.723474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.739435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.739469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.756158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 
[2024-11-18 18:07:11.756191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.773859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.773894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.783733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.783769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.797209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.797378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.812853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.812890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.824139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.824320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.246 [2024-11-18 18:07:11.840882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.246 [2024-11-18 18:07:11.840916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.855865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.855913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.871453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.871487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.888389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.888422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.905291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.905325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.921287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.921320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.938585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.938633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.956001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.956034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.971180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.971214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.982227] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.982411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:11.998247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:11.998282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.015248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.015280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.031478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.031511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.048282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.048315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.064276] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.064309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.081746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.081780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.505 [2024-11-18 18:07:12.097307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.505 [2024-11-18 18:07:12.097490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.115384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.115576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.131406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.131613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.147792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.147826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.164695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.164728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.181984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.182236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.196525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.196605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.213543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.213604] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.229439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.229472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.248316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.248348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.262186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.262219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.277879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.277917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.296309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.296341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.310925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.311160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.321090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.321124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.336358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.336394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.765 [2024-11-18 18:07:12.353281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.765 [2024-11-18 18:07:12.353314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.370810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.370863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.385445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.385498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.396900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.397176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.413673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.413758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.430509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.430586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.447632] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.447682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.464798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.464833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.481440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.481474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.499358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.499576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.514462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.514658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.532130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.532164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.547921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.547954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.564717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.564749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.581332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.581364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.597309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.597343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.025 [2024-11-18 18:07:12.615010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.025 [2024-11-18 18:07:12.615041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.632202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.632234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.648747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.648779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.664796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.664829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.682721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.682752] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.697188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.697221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.712257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.712290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.729940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.730146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.747487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.747520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.763903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.764113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.780429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.780464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.796879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.796929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.284 [2024-11-18 18:07:12.814191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.284 [2024-11-18 18:07:12.814369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.285 [2024-11-18 18:07:12.829890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.285 [2024-11-18 18:07:12.829926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.285 [2024-11-18 18:07:12.845443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.285 [2024-11-18 18:07:12.845623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.285 [2024-11-18 18:07:12.854252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.285 [2024-11-18 18:07:12.854290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.285 [2024-11-18 18:07:12.866989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.285 [2024-11-18 18:07:12.867024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.285 [2024-11-18 18:07:12.885004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.285 [2024-11-18 18:07:12.885042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.899764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.899950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.909393] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.909426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.925809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.925844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.942766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.942947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.959033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.959066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.976746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.976778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:12.993614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:12.993647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.010092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.010153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.026627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.026683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.043538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.043606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.060562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.060606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.077440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.077473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.093883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.093918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.111005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.111038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.127991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.128195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.544 [2024-11-18 18:07:13.145181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.544 [2024-11-18 18:07:13.145360] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[ ... 2024-11-18 18:07:13.160 - 18:07:16.527: the same two messages (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeat once per iteration of the namespace-add loop, differing only in timestamps; elided here for readability ... ]
00:08:58.184 Latency(us)
00:08:58.184 [2024-11-18T18:07:16.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:58.184 [2024-11-18T18:07:16.788Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:58.184 Nvme1n1 : 5.01 13195.58 103.09 0.00 0.00 9690.32 2189.50 18350.08
00:08:58.184 [2024-11-18T18:07:16.788Z] ===================================================================================================================
00:08:58.184 [2024-11-18T18:07:16.788Z] Total : 13195.58 103.09 0.00 0.00 9690.32 2189.50 18350.08
[ ... 2024-11-18 18:07:16.539 - 18:07:16.719: the same NSID-already-in-use / Unable-to-add-namespace pair repeats for the final iterations; elided here ... ]
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62845) - No such process
00:08:58.185 18:07:16 -- target/zcopy.sh@49 -- # wait 62845
00:08:58.185 18:07:16 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:58.185 18:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.185 18:07:16 -- common/autotest_common.sh@10 -- # set +x
00:08:58.185 18:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.185 18:07:16 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create
-b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:58.185 18:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.185 18:07:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.185 delay0 00:08:58.185 18:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.185 18:07:16 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:58.185 18:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.185 18:07:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.185 18:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.185 18:07:16 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:58.444 [2024-11-18 18:07:16.912822] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:05.014 Initializing NVMe Controllers 00:09:05.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:05.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:05.014 Initialization complete. Launching workers. 00:09:05.014 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 73 00:09:05.014 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 360, failed to submit 33 00:09:05.014 success 241, unsuccess 119, failed 0 00:09:05.014 18:07:22 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:05.014 18:07:22 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:05.014 18:07:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:05.014 18:07:22 -- nvmf/common.sh@116 -- # sync 00:09:05.014 18:07:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:05.014 18:07:23 -- nvmf/common.sh@119 -- # set +e 00:09:05.014 18:07:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:05.014 18:07:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:05.014 rmmod nvme_tcp 00:09:05.014 rmmod nvme_fabrics 00:09:05.014 rmmod nvme_keyring 00:09:05.014 18:07:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:05.014 18:07:23 -- nvmf/common.sh@123 -- # set -e 00:09:05.014 18:07:23 -- nvmf/common.sh@124 -- # return 0 00:09:05.014 18:07:23 -- nvmf/common.sh@477 -- # '[' -n 62695 ']' 00:09:05.014 18:07:23 -- nvmf/common.sh@478 -- # killprocess 62695 00:09:05.014 18:07:23 -- common/autotest_common.sh@936 -- # '[' -z 62695 ']' 00:09:05.014 18:07:23 -- common/autotest_common.sh@940 -- # kill -0 62695 00:09:05.014 18:07:23 -- common/autotest_common.sh@941 -- # uname 00:09:05.014 18:07:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:05.014 18:07:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62695 00:09:05.014 killing process with pid 62695 00:09:05.014 18:07:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:05.014 18:07:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:05.014 18:07:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62695' 00:09:05.014 18:07:23 -- common/autotest_common.sh@955 -- # kill 62695 00:09:05.014 18:07:23 -- common/autotest_common.sh@960 -- # wait 62695 00:09:05.014 18:07:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:05.014 18:07:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:05.014 18:07:23 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:09:05.014 18:07:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.014 18:07:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:05.014 18:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.014 18:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.014 18:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.014 18:07:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:05.014 00:09:05.014 real 0m24.416s 00:09:05.014 user 0m40.292s 00:09:05.014 sys 0m6.353s 00:09:05.014 18:07:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:05.014 ************************************ 00:09:05.014 END TEST nvmf_zcopy 00:09:05.014 ************************************ 00:09:05.014 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.014 18:07:23 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.014 18:07:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:05.014 18:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.015 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.015 ************************************ 00:09:05.015 START TEST nvmf_nmic 00:09:05.015 ************************************ 00:09:05.015 18:07:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.015 * Looking for test storage... 00:09:05.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.015 18:07:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:05.015 18:07:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:05.015 18:07:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:05.015 18:07:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:05.015 18:07:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:05.015 18:07:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:05.015 18:07:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:05.015 18:07:23 -- scripts/common.sh@335 -- # IFS=.-: 00:09:05.015 18:07:23 -- scripts/common.sh@335 -- # read -ra ver1 00:09:05.015 18:07:23 -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.015 18:07:23 -- scripts/common.sh@336 -- # read -ra ver2 00:09:05.015 18:07:23 -- scripts/common.sh@337 -- # local 'op=<' 00:09:05.015 18:07:23 -- scripts/common.sh@339 -- # ver1_l=2 00:09:05.015 18:07:23 -- scripts/common.sh@340 -- # ver2_l=1 00:09:05.015 18:07:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:05.015 18:07:23 -- scripts/common.sh@343 -- # case "$op" in 00:09:05.015 18:07:23 -- scripts/common.sh@344 -- # : 1 00:09:05.015 18:07:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:05.015 18:07:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.015 18:07:23 -- scripts/common.sh@364 -- # decimal 1 00:09:05.015 18:07:23 -- scripts/common.sh@352 -- # local d=1 00:09:05.015 18:07:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.015 18:07:23 -- scripts/common.sh@354 -- # echo 1 00:09:05.015 18:07:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:05.015 18:07:23 -- scripts/common.sh@365 -- # decimal 2 00:09:05.015 18:07:23 -- scripts/common.sh@352 -- # local d=2 00:09:05.015 18:07:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.015 18:07:23 -- scripts/common.sh@354 -- # echo 2 00:09:05.015 18:07:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:05.015 18:07:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:05.015 18:07:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:05.015 18:07:23 -- scripts/common.sh@367 -- # return 0 00:09:05.015 18:07:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.015 18:07:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.015 --rc genhtml_branch_coverage=1 00:09:05.015 --rc genhtml_function_coverage=1 00:09:05.015 --rc genhtml_legend=1 00:09:05.015 --rc geninfo_all_blocks=1 00:09:05.015 --rc geninfo_unexecuted_blocks=1 00:09:05.015 00:09:05.015 ' 00:09:05.015 18:07:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.015 --rc genhtml_branch_coverage=1 00:09:05.015 --rc genhtml_function_coverage=1 00:09:05.015 --rc genhtml_legend=1 00:09:05.015 --rc geninfo_all_blocks=1 00:09:05.015 --rc geninfo_unexecuted_blocks=1 00:09:05.015 00:09:05.015 ' 00:09:05.015 18:07:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.015 --rc genhtml_branch_coverage=1 00:09:05.015 --rc genhtml_function_coverage=1 00:09:05.015 --rc genhtml_legend=1 00:09:05.015 --rc geninfo_all_blocks=1 00:09:05.015 --rc geninfo_unexecuted_blocks=1 00:09:05.015 00:09:05.015 ' 00:09:05.015 18:07:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:05.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.015 --rc genhtml_branch_coverage=1 00:09:05.015 --rc genhtml_function_coverage=1 00:09:05.015 --rc genhtml_legend=1 00:09:05.015 --rc geninfo_all_blocks=1 00:09:05.015 --rc geninfo_unexecuted_blocks=1 00:09:05.015 00:09:05.015 ' 00:09:05.015 18:07:23 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.015 18:07:23 -- nvmf/common.sh@7 -- # uname -s 00:09:05.015 18:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.015 18:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.015 18:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.015 18:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.015 18:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.015 18:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.015 18:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.015 18:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.015 18:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.015 18:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:05.015 
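The flattened cmp_versions/lt xtrace above is hard to follow in this form, so the following is a minimal bash sketch of the field-wise comparison it performs when it evaluates 'lt 1.15 2' for the installed lcov. It is an illustrative reconstruction only, not the actual scripts/common.sh helper, and the name version_lt is made up for the example.

  # Illustrative reconstruction of the version test traced above -- not the real
  # scripts/common.sh source. Versions are split on '.', '-' and ':' and compared
  # numerically field by field; missing fields are treated as 0.
  version_lt() {
      local -a ver1 ver2
      local v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      ver1_l=${#ver1[@]}
      ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
      done
      return 1    # equal versions are not strictly less than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # the first fields decide: 1 < 2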
18:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:05.015 18:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.015 18:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.015 18:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.015 18:07:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.015 18:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.015 18:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.015 18:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.015 18:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.015 18:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.015 18:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.015 18:07:23 -- paths/export.sh@5 -- # export PATH 00:09:05.015 18:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.015 18:07:23 -- nvmf/common.sh@46 -- # : 0 00:09:05.015 18:07:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:05.015 18:07:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:05.015 18:07:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:05.015 18:07:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.015 18:07:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.015 18:07:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:05.015 18:07:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:05.015 18:07:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:05.015 18:07:23 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.015 18:07:23 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.015 18:07:23 -- target/nmic.sh@14 -- # nvmftestinit 00:09:05.015 18:07:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:05.015 18:07:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.015 18:07:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:05.015 18:07:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:05.015 18:07:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:05.015 18:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.015 18:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.015 18:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.015 18:07:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:05.015 18:07:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:05.015 18:07:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.015 18:07:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.015 18:07:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:05.015 18:07:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:05.015 18:07:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.015 18:07:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.015 18:07:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.015 18:07:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.015 18:07:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.015 18:07:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.015 18:07:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.015 18:07:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.015 18:07:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:05.274 18:07:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:05.274 Cannot find device "nvmf_tgt_br" 00:09:05.274 18:07:23 -- nvmf/common.sh@154 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.274 Cannot find device "nvmf_tgt_br2" 00:09:05.274 18:07:23 -- nvmf/common.sh@155 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:05.274 18:07:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:05.274 Cannot find device "nvmf_tgt_br" 00:09:05.274 18:07:23 -- nvmf/common.sh@157 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:05.274 Cannot find device "nvmf_tgt_br2" 00:09:05.274 18:07:23 -- nvmf/common.sh@158 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:05.274 18:07:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:05.274 18:07:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.274 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:05.274 18:07:23 -- nvmf/common.sh@161 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.274 18:07:23 -- nvmf/common.sh@162 -- # true 00:09:05.274 18:07:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.274 18:07:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.275 18:07:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.275 18:07:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.275 18:07:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.275 18:07:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.275 18:07:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.275 18:07:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:05.275 18:07:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:05.275 18:07:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:05.275 18:07:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:05.275 18:07:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:05.275 18:07:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:05.275 18:07:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.275 18:07:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.275 18:07:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.275 18:07:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:05.275 18:07:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:05.275 18:07:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.275 18:07:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.275 18:07:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:05.534 18:07:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.534 18:07:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.534 18:07:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:05.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:05.534 00:09:05.534 --- 10.0.0.2 ping statistics --- 00:09:05.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.534 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:05.534 18:07:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:05.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:05.534 00:09:05.534 --- 10.0.0.3 ping statistics --- 00:09:05.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.534 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:05.534 18:07:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:05.534 00:09:05.534 --- 10.0.0.1 ping statistics --- 00:09:05.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.534 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:05.534 18:07:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.534 18:07:23 -- nvmf/common.sh@421 -- # return 0 00:09:05.534 18:07:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:05.534 18:07:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.534 18:07:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:05.534 18:07:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:05.534 18:07:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.534 18:07:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:05.534 18:07:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:05.534 18:07:23 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:05.534 18:07:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:05.534 18:07:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.534 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 18:07:23 -- nvmf/common.sh@469 -- # nvmfpid=63177 00:09:05.534 18:07:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.534 18:07:23 -- nvmf/common.sh@470 -- # waitforlisten 63177 00:09:05.534 18:07:23 -- common/autotest_common.sh@829 -- # '[' -z 63177 ']' 00:09:05.534 18:07:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.534 18:07:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.534 18:07:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.534 18:07:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.534 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:09:05.534 [2024-11-18 18:07:23.990666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:05.534 [2024-11-18 18:07:23.991226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.534 [2024-11-18 18:07:24.134620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.794 [2024-11-18 18:07:24.207034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:05.794 [2024-11-18 18:07:24.207222] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.794 [2024-11-18 18:07:24.207238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.794 [2024-11-18 18:07:24.207248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.794 [2024-11-18 18:07:24.207436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.794 [2024-11-18 18:07:24.207595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.794 [2024-11-18 18:07:24.208210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.794 [2024-11-18 18:07:24.208246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.728 18:07:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.728 18:07:25 -- common/autotest_common.sh@862 -- # return 0 00:09:06.728 18:07:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:06.729 18:07:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 18:07:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.729 18:07:25 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 [2024-11-18 18:07:25.050202] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 Malloc0 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 [2024-11-18 18:07:25.104940] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 test case1: single bdev can't be used in multiple subsystems 00:09:06.729 18:07:25 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:06.729 18:07:25 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@28 -- # nmic_status=0 00:09:06.729 18:07:25 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 [2024-11-18 18:07:25.128749] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:06.729 [2024-11-18 18:07:25.128787] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:06.729 [2024-11-18 18:07:25.128813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.729 request: 00:09:06.729 { 00:09:06.729 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:06.729 "namespace": { 00:09:06.729 "bdev_name": "Malloc0" 00:09:06.729 }, 00:09:06.729 "method": "nvmf_subsystem_add_ns", 00:09:06.729 "req_id": 1 00:09:06.729 } 00:09:06.729 Got JSON-RPC error response 00:09:06.729 response: 00:09:06.729 { 00:09:06.729 "code": -32602, 00:09:06.729 "message": "Invalid parameters" 00:09:06.729 } 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@29 -- # nmic_status=1 00:09:06.729 18:07:25 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:06.729 Adding namespace failed - expected result. 00:09:06.729 18:07:25 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:06.729 test case2: host connect to nvmf target in multiple paths 00:09:06.729 18:07:25 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:06.729 18:07:25 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:06.729 18:07:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.729 18:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:06.729 [2024-11-18 18:07:25.140862] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:06.729 18:07:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.729 18:07:25 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.729 18:07:25 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:06.988 18:07:25 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.988 18:07:25 -- common/autotest_common.sh@1187 -- # local i=0 00:09:06.988 18:07:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.988 18:07:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:06.988 18:07:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:08.891 18:07:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:08.891 18:07:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:08.891 18:07:27 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.891 18:07:27 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:09:08.891 18:07:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.891 18:07:27 -- common/autotest_common.sh@1197 -- # return 0 00:09:08.891 18:07:27 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:08.891 [global] 00:09:08.891 thread=1 00:09:08.891 invalidate=1 00:09:08.891 rw=write 00:09:08.891 time_based=1 00:09:08.891 runtime=1 00:09:08.891 ioengine=libaio 00:09:08.891 direct=1 00:09:08.891 bs=4096 00:09:08.891 iodepth=1 00:09:08.891 norandommap=0 00:09:08.891 numjobs=1 00:09:08.891 00:09:08.891 verify_dump=1 00:09:08.891 verify_backlog=512 00:09:08.891 verify_state_save=0 00:09:08.891 do_verify=1 00:09:08.891 verify=crc32c-intel 00:09:08.891 [job0] 00:09:08.891 filename=/dev/nvme0n1 00:09:08.891 Could not set queue depth (nvme0n1) 00:09:09.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.149 fio-3.35 00:09:09.149 Starting 1 thread 00:09:10.525 00:09:10.525 job0: (groupid=0, jobs=1): err= 0: pid=63263: Mon Nov 18 18:07:28 2024 00:09:10.525 read: IOPS=3066, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:10.525 slat (nsec): min=10802, max=55869, avg=13552.53, stdev=4603.94 00:09:10.525 clat (usec): min=130, max=542, avg=178.46, stdev=24.45 00:09:10.525 lat (usec): min=143, max=570, avg=192.02, stdev=25.10 00:09:10.525 clat percentiles (usec): 00:09:10.525 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:09:10.525 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:09:10.525 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 221], 00:09:10.525 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 461], 00:09:10.525 | 99.99th=[ 545] 00:09:10.525 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:10.525 slat (usec): min=14, max=158, avg=22.15, stdev= 8.27 00:09:10.525 clat (usec): min=2, max=275, avg=108.14, stdev=18.37 00:09:10.525 lat (usec): min=97, max=294, avg=130.29, stdev=20.72 00:09:10.525 clat percentiles (usec): 00:09:10.525 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 94], 00:09:10.525 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 108], 00:09:10.525 | 70.00th=[ 115], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 145], 00:09:10.525 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 204], 99.95th=[ 225], 00:09:10.525 | 99.99th=[ 277] 00:09:10.525 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:10.525 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:10.525 lat (usec) : 4=0.02%, 50=0.02%, 100=19.57%, 250=80.17%, 500=0.21% 00:09:10.525 lat (usec) : 750=0.02% 00:09:10.525 cpu : usr=2.60%, sys=8.10%, ctx=6154, majf=0, minf=5 00:09:10.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.525 issued rwts: total=3070,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.525 00:09:10.525 Run status group 0 (all jobs): 00:09:10.525 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:10.526 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:10.526 
00:09:10.526 Disk stats (read/write): 00:09:10.526 nvme0n1: ios=2610/3059, merge=0/0, ticks=495/381, in_queue=876, util=91.48% 00:09:10.526 18:07:28 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:10.526 18:07:28 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.526 18:07:28 -- common/autotest_common.sh@1208 -- # local i=0 00:09:10.526 18:07:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:10.526 18:07:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.526 18:07:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.526 18:07:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:10.526 18:07:28 -- common/autotest_common.sh@1220 -- # return 0 00:09:10.526 18:07:28 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:10.526 18:07:28 -- target/nmic.sh@53 -- # nvmftestfini 00:09:10.526 18:07:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:10.526 18:07:28 -- nvmf/common.sh@116 -- # sync 00:09:10.526 18:07:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:10.526 18:07:28 -- nvmf/common.sh@119 -- # set +e 00:09:10.526 18:07:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:10.526 18:07:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:10.526 rmmod nvme_tcp 00:09:10.526 rmmod nvme_fabrics 00:09:10.526 rmmod nvme_keyring 00:09:10.526 18:07:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:10.526 18:07:28 -- nvmf/common.sh@123 -- # set -e 00:09:10.526 18:07:28 -- nvmf/common.sh@124 -- # return 0 00:09:10.526 18:07:28 -- nvmf/common.sh@477 -- # '[' -n 63177 ']' 00:09:10.526 18:07:28 -- nvmf/common.sh@478 -- # killprocess 63177 00:09:10.526 18:07:28 -- common/autotest_common.sh@936 -- # '[' -z 63177 ']' 00:09:10.526 18:07:28 -- common/autotest_common.sh@940 -- # kill -0 63177 00:09:10.526 18:07:28 -- common/autotest_common.sh@941 -- # uname 00:09:10.526 18:07:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.526 18:07:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63177 00:09:10.526 killing process with pid 63177 00:09:10.526 18:07:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:10.526 18:07:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:10.526 18:07:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63177' 00:09:10.526 18:07:28 -- common/autotest_common.sh@955 -- # kill 63177 00:09:10.526 18:07:28 -- common/autotest_common.sh@960 -- # wait 63177 00:09:10.785 18:07:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:10.785 18:07:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:10.785 18:07:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:10.785 18:07:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.785 18:07:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:10.785 18:07:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.785 18:07:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.785 18:07:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.785 18:07:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:10.785 ************************************ 00:09:10.785 END TEST nvmf_nmic 00:09:10.785 ************************************ 00:09:10.785 00:09:10.785 real 0m5.781s 00:09:10.785 user 0m18.584s 
00:09:10.785 sys 0m2.156s 00:09:10.785 18:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.785 18:07:29 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 18:07:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.785 18:07:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:10.785 18:07:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.785 18:07:29 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 ************************************ 00:09:10.785 START TEST nvmf_fio_target 00:09:10.785 ************************************ 00:09:10.785 18:07:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:10.785 * Looking for test storage... 00:09:10.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:10.785 18:07:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:10.785 18:07:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:10.785 18:07:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:10.785 18:07:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:10.785 18:07:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:10.785 18:07:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:10.785 18:07:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:10.785 18:07:29 -- scripts/common.sh@335 -- # IFS=.-: 00:09:10.785 18:07:29 -- scripts/common.sh@335 -- # read -ra ver1 00:09:10.786 18:07:29 -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.786 18:07:29 -- scripts/common.sh@336 -- # read -ra ver2 00:09:10.786 18:07:29 -- scripts/common.sh@337 -- # local 'op=<' 00:09:10.786 18:07:29 -- scripts/common.sh@339 -- # ver1_l=2 00:09:10.786 18:07:29 -- scripts/common.sh@340 -- # ver2_l=1 00:09:10.786 18:07:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:10.786 18:07:29 -- scripts/common.sh@343 -- # case "$op" in 00:09:10.786 18:07:29 -- scripts/common.sh@344 -- # : 1 00:09:10.786 18:07:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:10.786 18:07:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.045 18:07:29 -- scripts/common.sh@364 -- # decimal 1 00:09:11.045 18:07:29 -- scripts/common.sh@352 -- # local d=1 00:09:11.045 18:07:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.045 18:07:29 -- scripts/common.sh@354 -- # echo 1 00:09:11.045 18:07:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:11.045 18:07:29 -- scripts/common.sh@365 -- # decimal 2 00:09:11.045 18:07:29 -- scripts/common.sh@352 -- # local d=2 00:09:11.045 18:07:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.045 18:07:29 -- scripts/common.sh@354 -- # echo 2 00:09:11.045 18:07:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:11.045 18:07:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:11.045 18:07:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:11.045 18:07:29 -- scripts/common.sh@367 -- # return 0 00:09:11.045 18:07:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.045 18:07:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.045 --rc genhtml_branch_coverage=1 00:09:11.045 --rc genhtml_function_coverage=1 00:09:11.045 --rc genhtml_legend=1 00:09:11.045 --rc geninfo_all_blocks=1 00:09:11.045 --rc geninfo_unexecuted_blocks=1 00:09:11.045 00:09:11.045 ' 00:09:11.045 18:07:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.045 --rc genhtml_branch_coverage=1 00:09:11.045 --rc genhtml_function_coverage=1 00:09:11.045 --rc genhtml_legend=1 00:09:11.045 --rc geninfo_all_blocks=1 00:09:11.045 --rc geninfo_unexecuted_blocks=1 00:09:11.045 00:09:11.045 ' 00:09:11.045 18:07:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.045 --rc genhtml_branch_coverage=1 00:09:11.045 --rc genhtml_function_coverage=1 00:09:11.045 --rc genhtml_legend=1 00:09:11.045 --rc geninfo_all_blocks=1 00:09:11.045 --rc geninfo_unexecuted_blocks=1 00:09:11.045 00:09:11.045 ' 00:09:11.045 18:07:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.045 --rc genhtml_branch_coverage=1 00:09:11.045 --rc genhtml_function_coverage=1 00:09:11.045 --rc genhtml_legend=1 00:09:11.045 --rc geninfo_all_blocks=1 00:09:11.045 --rc geninfo_unexecuted_blocks=1 00:09:11.045 00:09:11.045 ' 00:09:11.045 18:07:29 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.045 18:07:29 -- nvmf/common.sh@7 -- # uname -s 00:09:11.045 18:07:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.045 18:07:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.045 18:07:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.045 18:07:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.045 18:07:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.045 18:07:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.045 18:07:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.045 18:07:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.045 18:07:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.045 18:07:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:11.045 
18:07:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:11.045 18:07:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.045 18:07:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.045 18:07:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.045 18:07:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.045 18:07:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.045 18:07:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.045 18:07:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.045 18:07:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.045 18:07:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.045 18:07:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.045 18:07:29 -- paths/export.sh@5 -- # export PATH 00:09:11.045 18:07:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.045 18:07:29 -- nvmf/common.sh@46 -- # : 0 00:09:11.045 18:07:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:11.045 18:07:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:11.045 18:07:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:11.045 18:07:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.045 18:07:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.045 18:07:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:11.045 18:07:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:11.045 18:07:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:11.045 18:07:29 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.045 18:07:29 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.045 18:07:29 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.045 18:07:29 -- target/fio.sh@16 -- # nvmftestinit 00:09:11.045 18:07:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:11.045 18:07:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.045 18:07:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:11.045 18:07:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:11.045 18:07:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:11.045 18:07:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.045 18:07:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.045 18:07:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.045 18:07:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:11.045 18:07:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:11.045 18:07:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.045 18:07:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.045 18:07:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:11.045 18:07:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:11.045 18:07:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.045 18:07:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.045 18:07:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.045 18:07:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.045 18:07:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.045 18:07:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.045 18:07:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.045 18:07:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.045 18:07:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:11.045 18:07:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:11.045 Cannot find device "nvmf_tgt_br" 00:09:11.045 18:07:29 -- nvmf/common.sh@154 -- # true 00:09:11.045 18:07:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.045 Cannot find device "nvmf_tgt_br2" 00:09:11.046 18:07:29 -- nvmf/common.sh@155 -- # true 00:09:11.046 18:07:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:11.046 18:07:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:11.046 Cannot find device "nvmf_tgt_br" 00:09:11.046 18:07:29 -- nvmf/common.sh@157 -- # true 00:09:11.046 18:07:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:11.046 Cannot find device "nvmf_tgt_br2" 00:09:11.046 18:07:29 -- nvmf/common.sh@158 -- # true 00:09:11.046 18:07:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:11.046 18:07:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:11.046 18:07:29 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.046 18:07:29 -- nvmf/common.sh@161 -- # true 00:09:11.046 18:07:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.046 18:07:29 -- nvmf/common.sh@162 -- # true 00:09:11.046 18:07:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.046 18:07:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.046 18:07:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.046 18:07:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.046 18:07:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.305 18:07:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.305 18:07:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.305 18:07:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:11.305 18:07:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:11.305 18:07:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:11.305 18:07:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:11.305 18:07:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:11.305 18:07:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:11.305 18:07:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.305 18:07:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.305 18:07:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.305 18:07:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:11.305 18:07:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:11.305 18:07:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:11.305 18:07:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:11.305 18:07:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:11.305 18:07:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:11.305 18:07:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:11.305 18:07:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:11.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:11.305 00:09:11.305 --- 10.0.0.2 ping statistics --- 00:09:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.305 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:11.305 18:07:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:11.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:11.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:11.305 00:09:11.305 --- 10.0.0.3 ping statistics --- 00:09:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.305 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:11.305 18:07:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:11.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:11.305 00:09:11.305 --- 10.0.0.1 ping statistics --- 00:09:11.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.305 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:11.305 18:07:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.305 18:07:29 -- nvmf/common.sh@421 -- # return 0 00:09:11.305 18:07:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:11.305 18:07:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.305 18:07:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:11.305 18:07:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:11.305 18:07:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.305 18:07:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:11.305 18:07:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:11.305 18:07:29 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:11.305 18:07:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:11.305 18:07:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.305 18:07:29 -- common/autotest_common.sh@10 -- # set +x 00:09:11.305 18:07:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.305 18:07:29 -- nvmf/common.sh@469 -- # nvmfpid=63453 00:09:11.305 18:07:29 -- nvmf/common.sh@470 -- # waitforlisten 63453 00:09:11.305 18:07:29 -- common/autotest_common.sh@829 -- # '[' -z 63453 ']' 00:09:11.305 18:07:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.305 18:07:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.305 18:07:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.305 18:07:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.305 18:07:29 -- common/autotest_common.sh@10 -- # set +x 00:09:11.305 [2024-11-18 18:07:29.879956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:11.305 [2024-11-18 18:07:29.880069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.565 [2024-11-18 18:07:30.023018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.565 [2024-11-18 18:07:30.079362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:11.565 [2024-11-18 18:07:30.079525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.565 [2024-11-18 18:07:30.079538] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:11.565 [2024-11-18 18:07:30.079559] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.565 [2024-11-18 18:07:30.079673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.565 [2024-11-18 18:07:30.080030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.565 [2024-11-18 18:07:30.080476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.565 [2024-11-18 18:07:30.080511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.502 18:07:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.502 18:07:30 -- common/autotest_common.sh@862 -- # return 0 00:09:12.502 18:07:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:12.502 18:07:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.502 18:07:30 -- common/autotest_common.sh@10 -- # set +x 00:09:12.502 18:07:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.502 18:07:30 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.761 [2024-11-18 18:07:31.131198] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.761 18:07:31 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.021 18:07:31 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:13.021 18:07:31 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.280 18:07:31 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:13.280 18:07:31 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.539 18:07:31 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:13.539 18:07:31 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.798 18:07:32 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:13.798 18:07:32 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:14.057 18:07:32 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.316 18:07:32 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:14.316 18:07:32 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.575 18:07:33 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:14.575 18:07:33 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:14.834 18:07:33 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:14.834 18:07:33 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:15.093 18:07:33 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.352 18:07:33 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:15.352 18:07:33 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.612 18:07:33 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:15.612 18:07:33 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:15.612 18:07:34 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.871 [2024-11-18 18:07:34.444408] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.871 18:07:34 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:16.130 18:07:34 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:16.389 18:07:34 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.649 18:07:35 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:16.649 18:07:35 -- common/autotest_common.sh@1187 -- # local i=0 00:09:16.649 18:07:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.649 18:07:35 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:09:16.649 18:07:35 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:09:16.649 18:07:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:18.553 18:07:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:18.553 18:07:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:18.553 18:07:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.553 18:07:37 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:09:18.553 18:07:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.553 18:07:37 -- common/autotest_common.sh@1197 -- # return 0 00:09:18.553 18:07:37 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.553 [global] 00:09:18.553 thread=1 00:09:18.553 invalidate=1 00:09:18.553 rw=write 00:09:18.553 time_based=1 00:09:18.553 runtime=1 00:09:18.553 ioengine=libaio 00:09:18.553 direct=1 00:09:18.553 bs=4096 00:09:18.553 iodepth=1 00:09:18.553 norandommap=0 00:09:18.553 numjobs=1 00:09:18.553 00:09:18.553 verify_dump=1 00:09:18.553 verify_backlog=512 00:09:18.553 verify_state_save=0 00:09:18.553 do_verify=1 00:09:18.553 verify=crc32c-intel 00:09:18.553 [job0] 00:09:18.553 filename=/dev/nvme0n1 00:09:18.553 [job1] 00:09:18.553 filename=/dev/nvme0n2 00:09:18.553 [job2] 00:09:18.553 filename=/dev/nvme0n3 00:09:18.553 [job3] 00:09:18.553 filename=/dev/nvme0n4 00:09:18.812 Could not set queue depth (nvme0n1) 00:09:18.812 Could not set queue depth (nvme0n2) 00:09:18.812 Could not set queue depth (nvme0n3) 00:09:18.812 Could not set queue depth (nvme0n4) 00:09:18.812 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.812 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.812 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.812 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.812 fio-3.35 00:09:18.812 Starting 4 threads 00:09:20.191 00:09:20.191 job0: (groupid=0, jobs=1): err= 0: pid=63637: Mon Nov 18 18:07:38 2024 00:09:20.191 read: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 
00:09:20.191 slat (nsec): min=10592, max=99930, avg=12862.43, stdev=2579.73 00:09:20.191 clat (usec): min=67, max=940, avg=163.00, stdev=18.50 00:09:20.191 lat (usec): min=141, max=951, avg=175.86, stdev=18.52 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:09:20.191 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:09:20.191 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 182], 00:09:20.191 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 408], 00:09:20.191 | 99.99th=[ 938] 00:09:20.191 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.191 slat (nsec): min=13918, max=88759, avg=20354.80, stdev=4389.22 00:09:20.191 clat (usec): min=92, max=1599, avg=128.47, stdev=28.38 00:09:20.191 lat (usec): min=110, max=1617, avg=148.83, stdev=28.78 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:09:20.191 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:09:20.191 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 145], 00:09:20.191 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 192], 00:09:20.191 | 99.99th=[ 1598] 00:09:20.191 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:20.191 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.191 lat (usec) : 100=0.08%, 250=99.87%, 500=0.02%, 1000=0.02% 00:09:20.191 lat (msec) : 2=0.02% 00:09:20.191 cpu : usr=2.10%, sys=8.00%, ctx=6105, majf=0, minf=9 00:09:20.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 issued rwts: total=3032,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.191 job1: (groupid=0, jobs=1): err= 0: pid=63638: Mon Nov 18 18:07:38 2024 00:09:20.191 read: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec) 00:09:20.191 slat (nsec): min=10614, max=29786, avg=12059.94, stdev=1377.09 00:09:20.191 clat (usec): min=134, max=493, avg=165.07, stdev=13.15 00:09:20.191 lat (usec): min=146, max=506, avg=177.13, stdev=13.24 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:20.191 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:09:20.191 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:09:20.191 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 215], 99.95th=[ 306], 00:09:20.191 | 99.99th=[ 494] 00:09:20.191 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.191 slat (nsec): min=13552, max=73519, avg=18673.79, stdev=2251.95 00:09:20.191 clat (usec): min=98, max=561, avg=130.73, stdev=14.21 00:09:20.191 lat (usec): min=115, max=580, avg=149.40, stdev=14.36 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 123], 00:09:20.191 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:09:20.191 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:09:20.191 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 198], 99.95th=[ 449], 00:09:20.191 | 99.99th=[ 562] 00:09:20.191 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 
00:09:20.191 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.191 lat (usec) : 100=0.07%, 250=99.87%, 500=0.05%, 750=0.02% 00:09:20.191 cpu : usr=2.30%, sys=7.00%, ctx=6077, majf=0, minf=13 00:09:20.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 issued rwts: total=3005,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.191 job2: (groupid=0, jobs=1): err= 0: pid=63639: Mon Nov 18 18:07:38 2024 00:09:20.191 read: IOPS=2686, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:09:20.191 slat (nsec): min=11510, max=35136, avg=13112.39, stdev=1692.40 00:09:20.191 clat (usec): min=140, max=1656, avg=175.73, stdev=35.55 00:09:20.191 lat (usec): min=153, max=1671, avg=188.84, stdev=35.66 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:09:20.191 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:09:20.191 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:09:20.191 | 99.00th=[ 210], 99.50th=[ 293], 99.90th=[ 603], 99.95th=[ 644], 00:09:20.191 | 99.99th=[ 1663] 00:09:20.191 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.191 slat (nsec): min=15435, max=63179, avg=20335.21, stdev=2797.11 00:09:20.191 clat (usec): min=104, max=442, avg=137.09, stdev=13.44 00:09:20.191 lat (usec): min=122, max=463, avg=157.43, stdev=13.81 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:09:20.191 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:09:20.191 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:09:20.191 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 289], 99.95th=[ 396], 00:09:20.191 | 99.99th=[ 445] 00:09:20.191 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:20.191 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.191 lat (usec) : 250=99.64%, 500=0.31%, 750=0.03% 00:09:20.191 lat (msec) : 2=0.02% 00:09:20.191 cpu : usr=1.70%, sys=7.90%, ctx=5763, majf=0, minf=11 00:09:20.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.191 job3: (groupid=0, jobs=1): err= 0: pid=63640: Mon Nov 18 18:07:38 2024 00:09:20.191 read: IOPS=2625, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:20.191 slat (nsec): min=10971, max=36213, avg=13778.60, stdev=2572.54 00:09:20.191 clat (usec): min=144, max=8065, avg=179.96, stdev=166.88 00:09:20.191 lat (usec): min=159, max=8077, avg=193.74, stdev=166.87 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:09:20.191 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:09:20.191 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:09:20.191 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 1598], 99.95th=[ 2802], 00:09:20.191 | 99.99th=[ 8094] 
00:09:20.191 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.191 slat (nsec): min=17540, max=76061, avg=20530.62, stdev=3272.96 00:09:20.191 clat (usec): min=109, max=249, avg=136.19, stdev=10.66 00:09:20.191 lat (usec): min=130, max=325, avg=156.72, stdev=11.26 00:09:20.191 clat percentiles (usec): 00:09:20.191 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 128], 00:09:20.191 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:20.191 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:09:20.191 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 184], 00:09:20.191 | 99.99th=[ 249] 00:09:20.191 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:20.191 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.191 lat (usec) : 250=99.89%, 1000=0.05% 00:09:20.191 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:09:20.191 cpu : usr=2.20%, sys=7.60%, ctx=5700, majf=0, minf=5 00:09:20.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.191 issued rwts: total=2628,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.191 00:09:20.191 Run status group 0 (all jobs): 00:09:20.191 READ: bw=44.3MiB/s (46.5MB/s), 10.3MiB/s-11.8MiB/s (10.8MB/s-12.4MB/s), io=44.4MiB (46.5MB), run=1001-1001msec 00:09:20.191 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:09:20.191 00:09:20.191 Disk stats (read/write): 00:09:20.191 nvme0n1: ios=2609/2682, merge=0/0, ticks=453/369, in_queue=822, util=87.85% 00:09:20.191 nvme0n2: ios=2576/2661, merge=0/0, ticks=442/371, in_queue=813, util=88.06% 00:09:20.191 nvme0n3: ios=2374/2560, merge=0/0, ticks=423/372, in_queue=795, util=89.30% 00:09:20.191 nvme0n4: ios=2315/2560, merge=0/0, ticks=410/367, in_queue=777, util=89.14% 00:09:20.191 18:07:38 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:20.191 [global] 00:09:20.191 thread=1 00:09:20.191 invalidate=1 00:09:20.191 rw=randwrite 00:09:20.191 time_based=1 00:09:20.191 runtime=1 00:09:20.191 ioengine=libaio 00:09:20.191 direct=1 00:09:20.191 bs=4096 00:09:20.191 iodepth=1 00:09:20.191 norandommap=0 00:09:20.191 numjobs=1 00:09:20.191 00:09:20.191 verify_dump=1 00:09:20.191 verify_backlog=512 00:09:20.191 verify_state_save=0 00:09:20.191 do_verify=1 00:09:20.191 verify=crc32c-intel 00:09:20.191 [job0] 00:09:20.191 filename=/dev/nvme0n1 00:09:20.191 [job1] 00:09:20.191 filename=/dev/nvme0n2 00:09:20.191 [job2] 00:09:20.191 filename=/dev/nvme0n3 00:09:20.191 [job3] 00:09:20.191 filename=/dev/nvme0n4 00:09:20.191 Could not set queue depth (nvme0n1) 00:09:20.191 Could not set queue depth (nvme0n2) 00:09:20.192 Could not set queue depth (nvme0n3) 00:09:20.192 Could not set queue depth (nvme0n4) 00:09:20.192 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.192 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.192 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.192 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:20.192 fio-3.35 00:09:20.192 Starting 4 threads 00:09:21.587 00:09:21.587 job0: (groupid=0, jobs=1): err= 0: pid=63693: Mon Nov 18 18:07:39 2024 00:09:21.587 read: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:09:21.587 slat (nsec): min=10109, max=47265, avg=12171.40, stdev=2701.28 00:09:21.587 clat (usec): min=131, max=228, avg=165.42, stdev=13.34 00:09:21.587 lat (usec): min=142, max=238, avg=177.59, stdev=13.62 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:21.587 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:21.587 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:09:21.587 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 219], 99.95th=[ 229], 00:09:21.587 | 99.99th=[ 229] 00:09:21.587 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:21.587 slat (nsec): min=13716, max=87002, avg=20197.27, stdev=4885.56 00:09:21.587 clat (usec): min=96, max=7154, avg=136.53, stdev=162.46 00:09:21.587 lat (usec): min=113, max=7173, avg=156.73, stdev=162.56 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:09:21.587 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:09:21.587 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:09:21.587 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 1647], 99.95th=[ 4015], 00:09:21.587 | 99.99th=[ 7177] 00:09:21.587 bw ( KiB/s): min=12288, max=12288, per=24.86%, avg=12288.00, stdev= 0.00, samples=1 00:09:21.587 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:21.587 lat (usec) : 100=0.02%, 250=99.83%, 750=0.03%, 1000=0.02% 00:09:21.587 lat (msec) : 2=0.05%, 4=0.02%, 10=0.03% 00:09:21.587 cpu : usr=2.60%, sys=7.30%, ctx=5940, majf=0, minf=11 00:09:21.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 issued rwts: total=2868,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.587 job1: (groupid=0, jobs=1): err= 0: pid=63694: Mon Nov 18 18:07:39 2024 00:09:21.587 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:21.587 slat (nsec): min=10276, max=47450, avg=12174.12, stdev=2759.98 00:09:21.587 clat (usec): min=128, max=217, avg=160.97, stdev=13.42 00:09:21.587 lat (usec): min=139, max=239, avg=173.15, stdev=13.88 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:21.587 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:09:21.587 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:09:21.587 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 215], 00:09:21.587 | 99.99th=[ 219] 00:09:21.587 write: IOPS=3151, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:09:21.587 slat (usec): min=12, max=177, avg=18.83, stdev= 6.50 00:09:21.587 clat (usec): min=2, max=945, avg=126.55, stdev=21.26 00:09:21.587 lat (usec): min=110, max=973, avg=145.38, stdev=21.58 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 117], 00:09:21.587 | 30.00th=[ 120], 40.00th=[ 123], 
50.00th=[ 125], 60.00th=[ 128], 00:09:21.587 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:09:21.587 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 273], 99.95th=[ 445], 00:09:21.587 | 99.99th=[ 947] 00:09:21.587 bw ( KiB/s): min=12544, max=12544, per=25.37%, avg=12544.00, stdev= 0.00, samples=1 00:09:21.587 iops : min= 3136, max= 3136, avg=3136.00, stdev= 0.00, samples=1 00:09:21.587 lat (usec) : 4=0.02%, 50=0.06%, 100=0.50%, 250=99.36%, 500=0.05% 00:09:21.587 lat (usec) : 1000=0.02% 00:09:21.587 cpu : usr=2.00%, sys=7.70%, ctx=6238, majf=0, minf=9 00:09:21.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 issued rwts: total=3072,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.587 job2: (groupid=0, jobs=1): err= 0: pid=63695: Mon Nov 18 18:07:39 2024 00:09:21.587 read: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:09:21.587 slat (nsec): min=10372, max=58042, avg=13969.32, stdev=4357.16 00:09:21.587 clat (usec): min=138, max=230, avg=171.47, stdev=14.67 00:09:21.587 lat (usec): min=151, max=250, avg=185.44, stdev=15.84 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:09:21.587 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:09:21.587 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:09:21.587 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 231], 00:09:21.587 | 99.99th=[ 231] 00:09:21.587 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:21.587 slat (nsec): min=13468, max=81312, avg=22508.54, stdev=7017.33 00:09:21.587 clat (usec): min=103, max=907, avg=139.30, stdev=20.87 00:09:21.587 lat (usec): min=122, max=925, avg=161.81, stdev=22.12 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:09:21.587 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:09:21.587 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 165], 00:09:21.587 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 245], 99.95th=[ 502], 00:09:21.587 | 99.99th=[ 906] 00:09:21.587 bw ( KiB/s): min=12288, max=12288, per=24.86%, avg=12288.00, stdev= 0.00, samples=1 00:09:21.587 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:21.587 lat (usec) : 250=99.95%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:21.587 cpu : usr=2.10%, sys=8.80%, ctx=5733, majf=0, minf=9 00:09:21.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 issued rwts: total=2657,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.587 job3: (groupid=0, jobs=1): err= 0: pid=63696: Mon Nov 18 18:07:39 2024 00:09:21.587 read: IOPS=2667, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:09:21.587 slat (nsec): min=11406, max=37325, avg=12865.54, stdev=2207.38 00:09:21.587 clat (usec): min=137, max=509, avg=172.84, stdev=16.62 00:09:21.587 lat (usec): min=149, max=522, avg=185.70, stdev=16.80 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 
1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:21.587 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:21.587 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:09:21.587 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 302], 99.95th=[ 408], 00:09:21.587 | 99.99th=[ 510] 00:09:21.587 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:21.587 slat (nsec): min=15051, max=70439, avg=20545.62, stdev=3947.61 00:09:21.587 clat (usec): min=106, max=1726, avg=140.74, stdev=34.26 00:09:21.587 lat (usec): min=125, max=1749, avg=161.29, stdev=34.60 00:09:21.587 clat percentiles (usec): 00:09:21.587 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:09:21.587 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:09:21.587 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:09:21.587 | 99.00th=[ 184], 99.50th=[ 241], 99.90th=[ 379], 99.95th=[ 412], 00:09:21.587 | 99.99th=[ 1729] 00:09:21.587 bw ( KiB/s): min=12288, max=12288, per=24.86%, avg=12288.00, stdev= 0.00, samples=1 00:09:21.587 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:21.587 lat (usec) : 250=99.69%, 500=0.28%, 750=0.02% 00:09:21.587 lat (msec) : 2=0.02% 00:09:21.587 cpu : usr=2.90%, sys=7.00%, ctx=5742, majf=0, minf=15 00:09:21.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.587 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.587 00:09:21.587 Run status group 0 (all jobs): 00:09:21.587 READ: bw=44.0MiB/s (46.1MB/s), 10.4MiB/s-12.0MiB/s (10.9MB/s-12.6MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:09:21.587 WRITE: bw=48.3MiB/s (50.6MB/s), 12.0MiB/s-12.3MiB/s (12.6MB/s-12.9MB/s), io=48.3MiB (50.7MB), run=1001-1001msec 00:09:21.587 00:09:21.587 Disk stats (read/write): 00:09:21.587 nvme0n1: ios=2540/2560, merge=0/0, ticks=446/357, in_queue=803, util=86.76% 00:09:21.587 nvme0n2: ios=2593/2786, merge=0/0, ticks=432/382, in_queue=814, util=88.02% 00:09:21.587 nvme0n3: ios=2327/2560, merge=0/0, ticks=410/382, in_queue=792, util=89.28% 00:09:21.587 nvme0n4: ios=2340/2560, merge=0/0, ticks=408/383, in_queue=791, util=89.75% 00:09:21.588 18:07:39 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:21.588 [global] 00:09:21.588 thread=1 00:09:21.588 invalidate=1 00:09:21.588 rw=write 00:09:21.588 time_based=1 00:09:21.588 runtime=1 00:09:21.588 ioengine=libaio 00:09:21.588 direct=1 00:09:21.588 bs=4096 00:09:21.588 iodepth=128 00:09:21.588 norandommap=0 00:09:21.588 numjobs=1 00:09:21.588 00:09:21.588 verify_dump=1 00:09:21.588 verify_backlog=512 00:09:21.588 verify_state_save=0 00:09:21.588 do_verify=1 00:09:21.588 verify=crc32c-intel 00:09:21.588 [job0] 00:09:21.588 filename=/dev/nvme0n1 00:09:21.588 [job1] 00:09:21.588 filename=/dev/nvme0n2 00:09:21.588 [job2] 00:09:21.588 filename=/dev/nvme0n3 00:09:21.588 [job3] 00:09:21.588 filename=/dev/nvme0n4 00:09:21.588 Could not set queue depth (nvme0n1) 00:09:21.588 Could not set queue depth (nvme0n2) 00:09:21.588 Could not set queue depth (nvme0n3) 00:09:21.588 Could not set queue depth (nvme0n4) 00:09:21.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.588 fio-3.35 00:09:21.588 Starting 4 threads 00:09:22.963 00:09:22.963 job0: (groupid=0, jobs=1): err= 0: pid=63759: Mon Nov 18 18:07:41 2024 00:09:22.963 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:22.963 slat (usec): min=5, max=4610, avg=82.46, stdev=398.21 00:09:22.963 clat (usec): min=6465, max=15804, avg=10902.25, stdev=1116.71 00:09:22.963 lat (usec): min=6491, max=16251, avg=10984.71, stdev=1141.96 00:09:22.963 clat percentiles (usec): 00:09:22.963 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10159], 00:09:22.963 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:09:22.963 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[12911], 00:09:22.963 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15795], 99.95th=[15795], 00:09:22.963 | 99.99th=[15795] 00:09:22.963 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1002msec); 0 zone resets 00:09:22.963 slat (usec): min=12, max=4506, avg=80.98, stdev=441.24 00:09:22.963 clat (usec): min=252, max=16442, avg=10735.90, stdev=1182.72 00:09:22.963 lat (usec): min=4078, max=16460, avg=10816.88, stdev=1248.60 00:09:22.963 clat percentiles (usec): 00:09:22.963 | 1.00th=[ 5473], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10290], 00:09:22.963 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:09:22.963 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[12387], 00:09:22.963 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15926], 99.95th=[15926], 00:09:22.963 | 99.99th=[16450] 00:09:22.963 bw ( KiB/s): min=23072, max=24625, per=35.50%, avg=23848.50, stdev=1098.14, samples=2 00:09:22.963 iops : min= 5768, max= 6156, avg=5962.00, stdev=274.36, samples=2 00:09:22.964 lat (usec) : 500=0.01% 00:09:22.964 lat (msec) : 10=13.48%, 20=86.51% 00:09:22.964 cpu : usr=5.09%, sys=15.08%, ctx=408, majf=0, minf=1 00:09:22.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.964 issued rwts: total=5632,6084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.964 job1: (groupid=0, jobs=1): err= 0: pid=63760: Mon Nov 18 18:07:41 2024 00:09:22.964 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:09:22.964 slat (usec): min=6, max=6141, avg=191.24, stdev=823.12 00:09:22.964 clat (usec): min=15125, max=44081, avg=24210.03, stdev=4772.96 00:09:22.964 lat (usec): min=15147, max=44349, avg=24401.26, stdev=4837.48 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[15926], 5.00th=[19006], 10.00th=[19530], 20.00th=[20055], 00:09:22.964 | 30.00th=[20055], 40.00th=[20579], 50.00th=[22938], 60.00th=[25822], 00:09:22.964 | 70.00th=[27919], 80.00th=[29230], 90.00th=[29492], 95.00th=[31851], 00:09:22.964 | 99.00th=[36439], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:09:22.964 | 99.99th=[44303] 00:09:22.964 write: IOPS=2480, BW=9920KiB/s (10.2MB/s)(9960KiB/1004msec); 0 zone resets 00:09:22.964 slat 
(usec): min=10, max=6597, avg=236.42, stdev=852.41 00:09:22.964 clat (usec): min=1887, max=61869, avg=31030.41, stdev=12397.18 00:09:22.964 lat (usec): min=8485, max=61895, avg=31266.83, stdev=12473.35 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[12387], 5.00th=[14353], 10.00th=[14746], 20.00th=[17695], 00:09:22.964 | 30.00th=[20317], 40.00th=[23462], 50.00th=[32375], 60.00th=[36439], 00:09:22.964 | 70.00th=[36963], 80.00th=[42730], 90.00th=[49021], 95.00th=[52167], 00:09:22.964 | 99.00th=[58983], 99.50th=[58983], 99.90th=[61604], 99.95th=[62129], 00:09:22.964 | 99.99th=[62129] 00:09:22.964 bw ( KiB/s): min= 7616, max=11280, per=14.06%, avg=9448.00, stdev=2590.84, samples=2 00:09:22.964 iops : min= 1904, max= 2820, avg=2362.00, stdev=647.71, samples=2 00:09:22.964 lat (msec) : 2=0.02%, 10=0.18%, 20=23.82%, 50=72.26%, 100=3.72% 00:09:22.964 cpu : usr=1.79%, sys=8.37%, ctx=320, majf=0, minf=3 00:09:22.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.964 issued rwts: total=2048,2490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.964 job2: (groupid=0, jobs=1): err= 0: pid=63761: Mon Nov 18 18:07:41 2024 00:09:22.964 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:09:22.964 slat (usec): min=7, max=10157, avg=166.77, stdev=884.17 00:09:22.964 clat (usec): min=12865, max=42720, avg=22046.42, stdev=5835.39 00:09:22.964 lat (usec): min=15739, max=42734, avg=22213.19, stdev=5812.34 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[14353], 5.00th=[16909], 10.00th=[17957], 20.00th=[18220], 00:09:22.964 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[21103], 00:09:22.964 | 70.00th=[24773], 80.00th=[26608], 90.00th=[27132], 95.00th=[37487], 00:09:22.964 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:22.964 | 99.99th=[42730] 00:09:22.964 write: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1002msec); 0 zone resets 00:09:22.964 slat (usec): min=11, max=9780, avg=145.87, stdev=705.01 00:09:22.964 clat (usec): min=560, max=30420, avg=18272.07, stdev=4024.21 00:09:22.964 lat (usec): min=3056, max=30448, avg=18417.95, stdev=3991.85 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[ 3949], 5.00th=[15139], 10.00th=[15401], 20.00th=[15664], 00:09:22.964 | 30.00th=[16057], 40.00th=[16450], 50.00th=[17695], 60.00th=[18744], 00:09:22.964 | 70.00th=[19530], 80.00th=[19792], 90.00th=[23987], 95.00th=[26346], 00:09:22.964 | 99.00th=[27919], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:09:22.964 | 99.99th=[30540] 00:09:22.964 bw ( KiB/s): min=12288, max=12312, per=18.31%, avg=12300.00, stdev=16.97, samples=2 00:09:22.964 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:22.964 lat (usec) : 750=0.02% 00:09:22.964 lat (msec) : 4=0.51%, 10=0.51%, 20=69.04%, 50=29.92% 00:09:22.964 cpu : usr=2.99%, sys=9.28%, ctx=197, majf=0, minf=11 00:09:22.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.964 issued rwts: total=3072,3169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.964 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:09:22.964 job3: (groupid=0, jobs=1): err= 0: pid=63762: Mon Nov 18 18:07:41 2024 00:09:22.964 read: IOPS=4988, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1001msec) 00:09:22.964 slat (usec): min=8, max=2918, avg=93.95, stdev=445.90 00:09:22.964 clat (usec): min=251, max=13508, avg=12427.30, stdev=1069.35 00:09:22.964 lat (usec): min=2905, max=13531, avg=12521.25, stdev=972.21 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[ 6521], 5.00th=[10945], 10.00th=[12125], 20.00th=[12256], 00:09:22.964 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:09:22.964 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13042], 95.00th=[13042], 00:09:22.964 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13435], 99.95th=[13435], 00:09:22.964 | 99.99th=[13566] 00:09:22.964 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:22.964 slat (usec): min=11, max=2832, avg=96.04, stdev=408.15 00:09:22.964 clat (usec): min=9616, max=13496, avg=12572.84, stdev=517.47 00:09:22.964 lat (usec): min=10692, max=13526, avg=12668.89, stdev=319.94 00:09:22.964 clat percentiles (usec): 00:09:22.964 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:09:22.964 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:09:22.964 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13042], 95.00th=[13173], 00:09:22.964 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13435], 99.95th=[13435], 00:09:22.964 | 99.99th=[13435] 00:09:22.964 bw ( KiB/s): min=20480, max=20521, per=30.51%, avg=20500.50, stdev=28.99, samples=2 00:09:22.964 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:22.964 lat (usec) : 500=0.01% 00:09:22.964 lat (msec) : 4=0.32%, 10=1.53%, 20=98.14% 00:09:22.964 cpu : usr=4.70%, sys=13.60%, ctx=320, majf=0, minf=2 00:09:22.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.964 issued rwts: total=4993,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.964 00:09:22.964 Run status group 0 (all jobs): 00:09:22.964 READ: bw=61.3MiB/s (64.2MB/s), 8159KiB/s-22.0MiB/s (8355kB/s-23.0MB/s), io=61.5MiB (64.5MB), run=1001-1004msec 00:09:22.964 WRITE: bw=65.6MiB/s (68.8MB/s), 9920KiB/s-23.7MiB/s (10.2MB/s-24.9MB/s), io=65.9MiB (69.1MB), run=1001-1004msec 00:09:22.964 00:09:22.964 Disk stats (read/write): 00:09:22.964 nvme0n1: ios=4992/5120, merge=0/0, ticks=25867/23183, in_queue=49050, util=88.38% 00:09:22.964 nvme0n2: ios=2088/2055, merge=0/0, ticks=16119/18603, in_queue=34722, util=89.09% 00:09:22.964 nvme0n3: ios=2560/2656, merge=0/0, ticks=13873/11433, in_queue=25306, util=88.91% 00:09:22.964 nvme0n4: ios=4128/4608, merge=0/0, ticks=11486/12458, in_queue=23944, util=89.74% 00:09:22.964 18:07:41 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:22.964 [global] 00:09:22.964 thread=1 00:09:22.964 invalidate=1 00:09:22.964 rw=randwrite 00:09:22.964 time_based=1 00:09:22.964 runtime=1 00:09:22.964 ioengine=libaio 00:09:22.964 direct=1 00:09:22.964 bs=4096 00:09:22.964 iodepth=128 00:09:22.964 norandommap=0 00:09:22.964 numjobs=1 00:09:22.964 00:09:22.964 verify_dump=1 00:09:22.965 verify_backlog=512 00:09:22.965 verify_state_save=0 00:09:22.965 do_verify=1 00:09:22.965 verify=crc32c-intel 
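The [global] stanza printed above is the job template that fio-wrapper generates for this pass: a one-second, time_based libaio workload issued straight to the block devices (direct=1) at iodepth=128, with do_verify=1 and verify=crc32c-intel so written blocks are read back and checksummed, and verify_backlog=512 so verification runs in batches of 512 blocks rather than only after the writes finish. The per-job stanzas that follow below simply attach the four /dev/nvme0nX namespaces. As an illustration only (the file path and the stand-alone invocation are assumptions, not what the wrapper literally does), the same workload could be written as an ordinary fio job file:

# Sketch of an equivalent stand-alone job file; option values are copied
# from the trace above, /tmp/nvmf-verify.fio is a made-up path.
cat > /tmp/nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-verify.fio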
00:09:22.965 [job0] 00:09:22.965 filename=/dev/nvme0n1 00:09:22.965 [job1] 00:09:22.965 filename=/dev/nvme0n2 00:09:22.965 [job2] 00:09:22.965 filename=/dev/nvme0n3 00:09:22.965 [job3] 00:09:22.965 filename=/dev/nvme0n4 00:09:22.965 Could not set queue depth (nvme0n1) 00:09:22.965 Could not set queue depth (nvme0n2) 00:09:22.965 Could not set queue depth (nvme0n3) 00:09:22.965 Could not set queue depth (nvme0n4) 00:09:22.965 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.965 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.965 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.965 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.965 fio-3.35 00:09:22.965 Starting 4 threads 00:09:24.340 00:09:24.340 job0: (groupid=0, jobs=1): err= 0: pid=63820: Mon Nov 18 18:07:42 2024 00:09:24.340 read: IOPS=1547, BW=6191KiB/s (6340kB/s)(6216KiB/1004msec) 00:09:24.340 slat (usec): min=8, max=16177, avg=216.43, stdev=1096.48 00:09:24.340 clat (usec): min=3501, max=72739, avg=27902.03, stdev=11648.64 00:09:24.340 lat (usec): min=5023, max=72758, avg=28118.46, stdev=11743.57 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[ 5932], 5.00th=[21627], 10.00th=[22152], 20.00th=[22414], 00:09:24.340 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:09:24.340 | 70.00th=[23987], 80.00th=[30278], 90.00th=[45876], 95.00th=[57934], 00:09:24.340 | 99.00th=[67634], 99.50th=[71828], 99.90th=[71828], 99.95th=[72877], 00:09:24.340 | 99.99th=[72877] 00:09:24.340 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:09:24.340 slat (usec): min=12, max=14570, avg=315.75, stdev=1324.24 00:09:24.340 clat (usec): min=6728, max=83983, avg=40531.70, stdev=18646.42 00:09:24.340 lat (usec): min=6752, max=84006, avg=40847.45, stdev=18748.81 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[11469], 5.00th=[16581], 10.00th=[20055], 20.00th=[27132], 00:09:24.340 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30802], 60.00th=[42730], 00:09:24.340 | 70.00th=[47449], 80.00th=[60556], 90.00th=[69731], 95.00th=[77071], 00:09:24.340 | 99.00th=[80217], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:09:24.340 | 99.99th=[84411] 00:09:24.340 bw ( KiB/s): min= 7336, max= 8192, per=11.54%, avg=7764.00, stdev=605.28, samples=2 00:09:24.340 iops : min= 1834, max= 2048, avg=1941.00, stdev=151.32, samples=2 00:09:24.340 lat (msec) : 4=0.03%, 10=0.67%, 20=6.41%, 50=72.49%, 100=20.41% 00:09:24.340 cpu : usr=1.79%, sys=6.08%, ctx=242, majf=0, minf=15 00:09:24.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:09:24.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.340 issued rwts: total=1554,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.340 job1: (groupid=0, jobs=1): err= 0: pid=63821: Mon Nov 18 18:07:42 2024 00:09:24.340 read: IOPS=5728, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1005msec) 00:09:24.340 slat (usec): min=8, max=4731, avg=78.27, stdev=484.50 00:09:24.340 clat (usec): min=702, max=18141, avg=10838.61, stdev=1383.93 00:09:24.340 lat (usec): min=4783, max=21380, avg=10916.87, 
stdev=1398.18 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[ 5473], 5.00th=[ 7439], 10.00th=[10159], 20.00th=[10552], 00:09:24.340 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:09:24.340 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:09:24.340 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:09:24.340 | 99.99th=[18220] 00:09:24.340 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:09:24.340 slat (usec): min=8, max=7813, avg=83.11, stdev=496.58 00:09:24.340 clat (usec): min=5617, max=15055, avg=10589.74, stdev=1091.88 00:09:24.340 lat (usec): min=6068, max=15263, avg=10672.85, stdev=1005.40 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[ 6259], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:09:24.340 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:09:24.340 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11338], 95.00th=[11469], 00:09:24.340 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15008], 99.95th=[15008], 00:09:24.340 | 99.99th=[15008] 00:09:24.340 bw ( KiB/s): min=24552, max=24576, per=36.52%, avg=24564.00, stdev=16.97, samples=2 00:09:24.340 iops : min= 6138, max= 6144, avg=6141.00, stdev= 4.24, samples=2 00:09:24.340 lat (usec) : 750=0.01% 00:09:24.340 lat (msec) : 10=12.79%, 20=87.20% 00:09:24.340 cpu : usr=4.78%, sys=14.44%, ctx=244, majf=0, minf=10 00:09:24.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:24.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.340 issued rwts: total=5757,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.340 job2: (groupid=0, jobs=1): err= 0: pid=63822: Mon Nov 18 18:07:42 2024 00:09:24.340 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1002msec) 00:09:24.340 slat (usec): min=8, max=3035, avg=94.21, stdev=447.13 00:09:24.340 clat (usec): min=268, max=15166, avg=12427.94, stdev=1158.35 00:09:24.340 lat (usec): min=2631, max=15180, avg=12522.15, stdev=1069.15 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[ 5997], 5.00th=[11863], 10.00th=[12125], 20.00th=[12387], 00:09:24.340 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:09:24.340 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:09:24.340 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15139], 99.95th=[15139], 00:09:24.340 | 99.99th=[15139] 00:09:24.340 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:24.340 slat (usec): min=10, max=4679, avg=96.52, stdev=418.34 00:09:24.340 clat (usec): min=9410, max=15343, avg=12653.50, stdev=563.58 00:09:24.340 lat (usec): min=10892, max=15371, avg=12750.02, stdev=382.52 00:09:24.340 clat percentiles (usec): 00:09:24.340 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:09:24.340 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:09:24.340 | 70.00th=[12911], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:09:24.340 | 99.00th=[13566], 99.50th=[15008], 99.90th=[15270], 99.95th=[15270], 00:09:24.340 | 99.99th=[15401] 00:09:24.340 bw ( KiB/s): min=20480, max=20521, per=30.48%, avg=20500.50, stdev=28.99, samples=2 00:09:24.340 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:24.340 lat (usec) : 500=0.01% 00:09:24.340 lat 
(msec) : 4=0.32%, 10=1.53%, 20=98.15% 00:09:24.340 cpu : usr=4.50%, sys=13.29%, ctx=317, majf=0, minf=7 00:09:24.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:24.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.341 issued rwts: total=4961,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.341 job3: (groupid=0, jobs=1): err= 0: pid=63823: Mon Nov 18 18:07:42 2024 00:09:24.341 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:24.341 slat (usec): min=7, max=20820, avg=143.03, stdev=974.10 00:09:24.341 clat (usec): min=4814, max=40034, avg=19845.28, stdev=4469.38 00:09:24.341 lat (usec): min=4826, max=46486, avg=19988.30, stdev=4519.27 00:09:24.341 clat percentiles (usec): 00:09:24.341 | 1.00th=[10945], 5.00th=[14746], 10.00th=[16057], 20.00th=[16581], 00:09:24.341 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17695], 60.00th=[22414], 00:09:24.341 | 70.00th=[23200], 80.00th=[23462], 90.00th=[24249], 95.00th=[28705], 00:09:24.341 | 99.00th=[31065], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439], 00:09:24.341 | 99.99th=[40109] 00:09:24.341 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:24.341 slat (usec): min=6, max=16815, avg=127.65, stdev=822.36 00:09:24.341 clat (usec): min=3244, max=30939, avg=15627.61, stdev=3325.94 00:09:24.341 lat (usec): min=3270, max=30985, avg=15755.26, stdev=3263.35 00:09:24.341 clat percentiles (usec): 00:09:24.341 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[12649], 20.00th=[13173], 00:09:24.341 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[15270], 00:09:24.341 | 70.00th=[16581], 80.00th=[19268], 90.00th=[20317], 95.00th=[20579], 00:09:24.341 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28443], 99.95th=[28705], 00:09:24.341 | 99.99th=[31065] 00:09:24.341 bw ( KiB/s): min=12296, max=16408, per=21.34%, avg=14352.00, stdev=2907.62, samples=2 00:09:24.341 iops : min= 3074, max= 4102, avg=3588.00, stdev=726.91, samples=2 00:09:24.341 lat (msec) : 4=0.04%, 10=0.59%, 20=72.21%, 50=27.16% 00:09:24.341 cpu : usr=3.29%, sys=9.87%, ctx=157, majf=0, minf=11 00:09:24.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:24.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.341 issued rwts: total=3584,3588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.341 00:09:24.341 Run status group 0 (all jobs): 00:09:24.341 READ: bw=61.6MiB/s (64.6MB/s), 6191KiB/s-22.4MiB/s (6340kB/s-23.5MB/s), io=61.9MiB (64.9MB), run=1002-1005msec 00:09:24.341 WRITE: bw=65.7MiB/s (68.9MB/s), 8159KiB/s-23.9MiB/s (8355kB/s-25.0MB/s), io=66.0MiB (69.2MB), run=1002-1005msec 00:09:24.341 00:09:24.341 Disk stats (read/write): 00:09:24.341 nvme0n1: ios=1585/1695, merge=0/0, ticks=13795/20436, in_queue=34231, util=87.94% 00:09:24.341 nvme0n2: ios=4967/5120, merge=0/0, ticks=50354/50062, in_queue=100416, util=87.92% 00:09:24.341 nvme0n3: ios=4096/4512, merge=0/0, ticks=11339/12378, in_queue=23717, util=89.20% 00:09:24.341 nvme0n4: ios=2812/3072, merge=0/0, ticks=56016/46083, in_queue=102099, util=89.67% 00:09:24.341 18:07:42 -- target/fio.sh@55 -- # sync 00:09:24.341 18:07:42 -- target/fio.sh@59 -- # 
fio_pid=63837 00:09:24.341 18:07:42 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:24.341 18:07:42 -- target/fio.sh@61 -- # sleep 3 00:09:24.341 [global] 00:09:24.341 thread=1 00:09:24.341 invalidate=1 00:09:24.341 rw=read 00:09:24.341 time_based=1 00:09:24.341 runtime=10 00:09:24.341 ioengine=libaio 00:09:24.341 direct=1 00:09:24.341 bs=4096 00:09:24.341 iodepth=1 00:09:24.341 norandommap=1 00:09:24.341 numjobs=1 00:09:24.341 00:09:24.341 [job0] 00:09:24.341 filename=/dev/nvme0n1 00:09:24.341 [job1] 00:09:24.341 filename=/dev/nvme0n2 00:09:24.341 [job2] 00:09:24.341 filename=/dev/nvme0n3 00:09:24.341 [job3] 00:09:24.341 filename=/dev/nvme0n4 00:09:24.341 Could not set queue depth (nvme0n1) 00:09:24.341 Could not set queue depth (nvme0n2) 00:09:24.341 Could not set queue depth (nvme0n3) 00:09:24.341 Could not set queue depth (nvme0n4) 00:09:24.341 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.341 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.341 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.341 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.341 fio-3.35 00:09:24.341 Starting 4 threads 00:09:27.626 18:07:45 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:27.626 fio: pid=63880, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.627 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35635200, buflen=4096 00:09:27.627 18:07:45 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:27.627 fio: pid=63879, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.627 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42180608, buflen=4096 00:09:27.627 18:07:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.627 18:07:46 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:27.886 fio: pid=63877, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:27.886 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43180032, buflen=4096 00:09:27.886 18:07:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:27.886 18:07:46 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:28.145 fio: pid=63878, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:28.145 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=18051072, buflen=4096 00:09:28.145 18:07:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.145 18:07:46 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:28.145 00:09:28.145 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63877: Mon Nov 18 18:07:46 2024 00:09:28.145 read: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(41.2MiB/3517msec) 00:09:28.145 slat (usec): min=7, max=8256, avg=22.88, stdev=147.22 00:09:28.145 clat (usec): 
min=125, max=3644, avg=308.86, stdev=65.34 00:09:28.145 lat (usec): min=139, max=8457, avg=331.74, stdev=161.19 00:09:28.145 clat percentiles (usec): 00:09:28.145 | 1.00th=[ 169], 5.00th=[ 217], 10.00th=[ 231], 20.00th=[ 253], 00:09:28.145 | 30.00th=[ 289], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:09:28.145 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 363], 00:09:28.145 | 99.00th=[ 396], 99.50th=[ 469], 99.90th=[ 603], 99.95th=[ 1074], 00:09:28.145 | 99.99th=[ 1795] 00:09:28.145 bw ( KiB/s): min=10728, max=14840, per=21.68%, avg=11565.00, stdev=1609.16, samples=6 00:09:28.145 iops : min= 2682, max= 3710, avg=2891.17, stdev=402.33, samples=6 00:09:28.145 lat (usec) : 250=18.52%, 500=81.33%, 750=0.06%, 1000=0.02% 00:09:28.145 lat (msec) : 2=0.05%, 4=0.01% 00:09:28.145 cpu : usr=1.19%, sys=5.26%, ctx=10559, majf=0, minf=1 00:09:28.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.145 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.145 issued rwts: total=10543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.145 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63878: Mon Nov 18 18:07:46 2024 00:09:28.145 read: IOPS=5509, BW=21.5MiB/s (22.6MB/s)(81.2MiB/3774msec) 00:09:28.145 slat (usec): min=9, max=12501, avg=15.07, stdev=154.30 00:09:28.145 clat (usec): min=99, max=8063, avg=165.17, stdev=74.51 00:09:28.145 lat (usec): min=136, max=12751, avg=180.24, stdev=173.03 00:09:28.145 clat percentiles (usec): 00:09:28.145 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:09:28.145 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:09:28.146 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 223], 00:09:28.146 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 388], 99.95th=[ 1123], 00:09:28.146 | 99.99th=[ 2573] 00:09:28.146 bw ( KiB/s): min=14861, max=24032, per=41.95%, avg=22376.71, stdev=3392.77, samples=7 00:09:28.146 iops : min= 3715, max= 6008, avg=5594.14, stdev=848.28, samples=7 00:09:28.146 lat (usec) : 100=0.01%, 250=96.24%, 500=3.66%, 750=0.02%, 1000=0.01% 00:09:28.146 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:09:28.146 cpu : usr=1.59%, sys=6.25%, ctx=20813, majf=0, minf=1 00:09:28.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 issued rwts: total=20792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.146 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63879: Mon Nov 18 18:07:46 2024 00:09:28.146 read: IOPS=3140, BW=12.3MiB/s (12.9MB/s)(40.2MiB/3279msec) 00:09:28.146 slat (usec): min=8, max=9032, avg=17.60, stdev=108.90 00:09:28.146 clat (usec): min=138, max=7487, avg=298.96, stdev=111.97 00:09:28.146 lat (usec): min=153, max=9450, avg=316.55, stdev=157.43 00:09:28.146 clat percentiles (usec): 00:09:28.146 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 186], 00:09:28.146 | 30.00th=[ 289], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 343], 00:09:28.146 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 
367], 00:09:28.146 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 486], 99.95th=[ 979], 00:09:28.146 | 99.99th=[ 3458] 00:09:28.146 bw ( KiB/s): min=10984, max=20208, per=23.62%, avg=12600.67, stdev=3728.38, samples=6 00:09:28.146 iops : min= 2746, max= 5052, avg=3150.17, stdev=932.10, samples=6 00:09:28.146 lat (usec) : 250=26.61%, 500=73.28%, 750=0.02%, 1000=0.03% 00:09:28.146 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:09:28.146 cpu : usr=1.16%, sys=4.94%, ctx=10302, majf=0, minf=2 00:09:28.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 issued rwts: total=10299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.146 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63880: Mon Nov 18 18:07:46 2024 00:09:28.146 read: IOPS=2908, BW=11.4MiB/s (11.9MB/s)(34.0MiB/2992msec) 00:09:28.146 slat (nsec): min=9200, max=65970, avg=15688.22, stdev=5110.47 00:09:28.146 clat (usec): min=160, max=3373, avg=326.61, stdev=52.92 00:09:28.146 lat (usec): min=172, max=3384, avg=342.30, stdev=53.70 00:09:28.146 clat percentiles (usec): 00:09:28.146 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 273], 00:09:28.146 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:09:28.146 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 363], 95.00th=[ 371], 00:09:28.146 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 441], 00:09:28.146 | 99.99th=[ 3359] 00:09:28.146 bw ( KiB/s): min=10984, max=14848, per=22.11%, avg=11793.00, stdev=1709.04, samples=5 00:09:28.146 iops : min= 2746, max= 3712, avg=2948.20, stdev=427.28, samples=5 00:09:28.146 lat (usec) : 250=8.49%, 500=91.45%, 750=0.03% 00:09:28.146 lat (msec) : 4=0.01% 00:09:28.146 cpu : usr=0.80%, sys=4.68%, ctx=8702, majf=0, minf=2 00:09:28.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:28.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.146 issued rwts: total=8701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:28.146 00:09:28.146 Run status group 0 (all jobs): 00:09:28.146 READ: bw=52.1MiB/s (54.6MB/s), 11.4MiB/s-21.5MiB/s (11.9MB/s-22.6MB/s), io=197MiB (206MB), run=2992-3774msec 00:09:28.146 00:09:28.146 Disk stats (read/write): 00:09:28.146 nvme0n1: ios=9928/0, merge=0/0, ticks=3120/0, in_queue=3120, util=95.68% 00:09:28.146 nvme0n2: ios=20024/0, merge=0/0, ticks=3338/0, in_queue=3338, util=95.58% 00:09:28.146 nvme0n3: ios=9806/0, merge=0/0, ticks=2837/0, in_queue=2837, util=96.28% 00:09:28.146 nvme0n4: ios=8374/0, merge=0/0, ticks=2653/0, in_queue=2653, util=96.77% 00:09:28.405 18:07:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.405 18:07:46 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:28.664 18:07:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.664 18:07:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:28.923 18:07:47 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:28.923 18:07:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:29.182 18:07:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:29.182 18:07:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:29.441 18:07:47 -- target/fio.sh@69 -- # fio_status=0 00:09:29.441 18:07:47 -- target/fio.sh@70 -- # wait 63837 00:09:29.441 18:07:47 -- target/fio.sh@70 -- # fio_status=4 00:09:29.441 18:07:47 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.441 18:07:47 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.441 18:07:47 -- common/autotest_common.sh@1208 -- # local i=0 00:09:29.441 18:07:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:29.441 18:07:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.441 18:07:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:29.441 18:07:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.441 18:07:48 -- common/autotest_common.sh@1220 -- # return 0 00:09:29.441 18:07:48 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:29.441 18:07:48 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:29.441 nvmf hotplug test: fio failed as expected 00:09:29.441 18:07:48 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.701 18:07:48 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:29.701 18:07:48 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:29.701 18:07:48 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:29.701 18:07:48 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:29.701 18:07:48 -- target/fio.sh@91 -- # nvmftestfini 00:09:29.701 18:07:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:29.701 18:07:48 -- nvmf/common.sh@116 -- # sync 00:09:29.701 18:07:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:29.701 18:07:48 -- nvmf/common.sh@119 -- # set +e 00:09:29.701 18:07:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:29.701 18:07:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:29.701 rmmod nvme_tcp 00:09:29.701 rmmod nvme_fabrics 00:09:29.701 rmmod nvme_keyring 00:09:29.701 18:07:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:29.701 18:07:48 -- nvmf/common.sh@123 -- # set -e 00:09:29.701 18:07:48 -- nvmf/common.sh@124 -- # return 0 00:09:29.701 18:07:48 -- nvmf/common.sh@477 -- # '[' -n 63453 ']' 00:09:29.701 18:07:48 -- nvmf/common.sh@478 -- # killprocess 63453 00:09:29.701 18:07:48 -- common/autotest_common.sh@936 -- # '[' -z 63453 ']' 00:09:29.701 18:07:48 -- common/autotest_common.sh@940 -- # kill -0 63453 00:09:29.960 18:07:48 -- common/autotest_common.sh@941 -- # uname 00:09:29.960 18:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:29.960 18:07:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63453 00:09:29.960 18:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:29.960 18:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:29.960 18:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63453' 00:09:29.960 
killing process with pid 63453 00:09:29.960 18:07:48 -- common/autotest_common.sh@955 -- # kill 63453 00:09:29.960 18:07:48 -- common/autotest_common.sh@960 -- # wait 63453 00:09:29.960 18:07:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:29.960 18:07:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:29.960 18:07:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:29.960 18:07:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.960 18:07:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:29.960 18:07:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.960 18:07:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.960 18:07:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.219 18:07:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:30.219 ************************************ 00:09:30.219 END TEST nvmf_fio_target 00:09:30.219 ************************************ 00:09:30.219 00:09:30.219 real 0m19.349s 00:09:30.219 user 1m12.735s 00:09:30.219 sys 0m10.100s 00:09:30.219 18:07:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.219 18:07:48 -- common/autotest_common.sh@10 -- # set +x 00:09:30.219 18:07:48 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.219 18:07:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:30.219 18:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.219 18:07:48 -- common/autotest_common.sh@10 -- # set +x 00:09:30.219 ************************************ 00:09:30.219 START TEST nvmf_bdevio 00:09:30.219 ************************************ 00:09:30.220 18:07:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.220 * Looking for test storage... 00:09:30.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.220 18:07:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:30.220 18:07:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:30.220 18:07:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:30.220 18:07:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:30.220 18:07:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:30.220 18:07:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:30.220 18:07:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:30.220 18:07:48 -- scripts/common.sh@335 -- # IFS=.-: 00:09:30.220 18:07:48 -- scripts/common.sh@335 -- # read -ra ver1 00:09:30.220 18:07:48 -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.220 18:07:48 -- scripts/common.sh@336 -- # read -ra ver2 00:09:30.220 18:07:48 -- scripts/common.sh@337 -- # local 'op=<' 00:09:30.220 18:07:48 -- scripts/common.sh@339 -- # ver1_l=2 00:09:30.220 18:07:48 -- scripts/common.sh@340 -- # ver2_l=1 00:09:30.220 18:07:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:30.220 18:07:48 -- scripts/common.sh@343 -- # case "$op" in 00:09:30.220 18:07:48 -- scripts/common.sh@344 -- # : 1 00:09:30.220 18:07:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:30.220 18:07:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.220 18:07:48 -- scripts/common.sh@364 -- # decimal 1 00:09:30.220 18:07:48 -- scripts/common.sh@352 -- # local d=1 00:09:30.220 18:07:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.220 18:07:48 -- scripts/common.sh@354 -- # echo 1 00:09:30.220 18:07:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:30.220 18:07:48 -- scripts/common.sh@365 -- # decimal 2 00:09:30.220 18:07:48 -- scripts/common.sh@352 -- # local d=2 00:09:30.220 18:07:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.220 18:07:48 -- scripts/common.sh@354 -- # echo 2 00:09:30.220 18:07:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:30.220 18:07:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:30.220 18:07:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:30.220 18:07:48 -- scripts/common.sh@367 -- # return 0 00:09:30.220 18:07:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.220 18:07:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.220 --rc genhtml_branch_coverage=1 00:09:30.220 --rc genhtml_function_coverage=1 00:09:30.220 --rc genhtml_legend=1 00:09:30.220 --rc geninfo_all_blocks=1 00:09:30.220 --rc geninfo_unexecuted_blocks=1 00:09:30.220 00:09:30.220 ' 00:09:30.220 18:07:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.220 --rc genhtml_branch_coverage=1 00:09:30.220 --rc genhtml_function_coverage=1 00:09:30.220 --rc genhtml_legend=1 00:09:30.220 --rc geninfo_all_blocks=1 00:09:30.220 --rc geninfo_unexecuted_blocks=1 00:09:30.220 00:09:30.220 ' 00:09:30.220 18:07:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.220 --rc genhtml_branch_coverage=1 00:09:30.220 --rc genhtml_function_coverage=1 00:09:30.220 --rc genhtml_legend=1 00:09:30.220 --rc geninfo_all_blocks=1 00:09:30.220 --rc geninfo_unexecuted_blocks=1 00:09:30.220 00:09:30.220 ' 00:09:30.220 18:07:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.220 --rc genhtml_branch_coverage=1 00:09:30.220 --rc genhtml_function_coverage=1 00:09:30.220 --rc genhtml_legend=1 00:09:30.220 --rc geninfo_all_blocks=1 00:09:30.220 --rc geninfo_unexecuted_blocks=1 00:09:30.220 00:09:30.220 ' 00:09:30.220 18:07:48 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.220 18:07:48 -- nvmf/common.sh@7 -- # uname -s 00:09:30.220 18:07:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.220 18:07:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.220 18:07:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.220 18:07:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.220 18:07:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.220 18:07:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.220 18:07:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.220 18:07:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.220 18:07:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.220 18:07:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.220 18:07:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:30.220 
18:07:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:30.220 18:07:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.220 18:07:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.220 18:07:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.220 18:07:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.220 18:07:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.220 18:07:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.220 18:07:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.220 18:07:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.220 18:07:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.220 18:07:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.220 18:07:48 -- paths/export.sh@5 -- # export PATH 00:09:30.220 18:07:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.220 18:07:48 -- nvmf/common.sh@46 -- # : 0 00:09:30.220 18:07:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:30.220 18:07:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:30.220 18:07:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:30.220 18:07:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.220 18:07:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.220 18:07:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:30.220 18:07:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:30.220 18:07:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:30.220 18:07:48 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.220 18:07:48 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.220 18:07:48 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:30.220 18:07:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:30.220 18:07:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.220 18:07:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:30.220 18:07:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:30.220 18:07:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:30.220 18:07:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.220 18:07:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.220 18:07:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.480 18:07:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:30.480 18:07:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:30.480 18:07:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:30.480 18:07:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:30.480 18:07:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:30.480 18:07:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:30.480 18:07:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.480 18:07:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.480 18:07:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:30.480 18:07:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:30.480 18:07:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.480 18:07:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.480 18:07:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.480 18:07:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.480 18:07:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.480 18:07:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.480 18:07:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.480 18:07:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.480 18:07:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:30.480 18:07:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:30.480 Cannot find device "nvmf_tgt_br" 00:09:30.480 18:07:48 -- nvmf/common.sh@154 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.480 Cannot find device "nvmf_tgt_br2" 00:09:30.480 18:07:48 -- nvmf/common.sh@155 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:30.480 18:07:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:30.480 Cannot find device "nvmf_tgt_br" 00:09:30.480 18:07:48 -- nvmf/common.sh@157 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:30.480 Cannot find device "nvmf_tgt_br2" 00:09:30.480 18:07:48 -- nvmf/common.sh@158 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:30.480 18:07:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:30.480 18:07:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.480 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:30.480 18:07:48 -- nvmf/common.sh@161 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.480 18:07:48 -- nvmf/common.sh@162 -- # true 00:09:30.480 18:07:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.480 18:07:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.480 18:07:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.480 18:07:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.480 18:07:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.480 18:07:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.480 18:07:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.480 18:07:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:30.480 18:07:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:30.480 18:07:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:30.480 18:07:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:30.480 18:07:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:30.480 18:07:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:30.480 18:07:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.480 18:07:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.480 18:07:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.480 18:07:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:30.739 18:07:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:30.739 18:07:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.739 18:07:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.739 18:07:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.739 18:07:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.739 18:07:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.739 18:07:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:30.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:09:30.739 00:09:30.739 --- 10.0.0.2 ping statistics --- 00:09:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.739 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:30.739 18:07:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:30.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:09:30.739 00:09:30.739 --- 10.0.0.3 ping statistics --- 00:09:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.739 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:30.739 18:07:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:30.739 00:09:30.739 --- 10.0.0.1 ping statistics --- 00:09:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.739 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:30.739 18:07:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.739 18:07:49 -- nvmf/common.sh@421 -- # return 0 00:09:30.739 18:07:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:30.739 18:07:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.740 18:07:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:30.740 18:07:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:30.740 18:07:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.740 18:07:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:30.740 18:07:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:30.740 18:07:49 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:30.740 18:07:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:30.740 18:07:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.740 18:07:49 -- common/autotest_common.sh@10 -- # set +x 00:09:30.740 18:07:49 -- nvmf/common.sh@469 -- # nvmfpid=64158 00:09:30.740 18:07:49 -- nvmf/common.sh@470 -- # waitforlisten 64158 00:09:30.740 18:07:49 -- common/autotest_common.sh@829 -- # '[' -z 64158 ']' 00:09:30.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.740 18:07:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:30.740 18:07:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.740 18:07:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.740 18:07:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.740 18:07:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.740 18:07:49 -- common/autotest_common.sh@10 -- # set +x 00:09:30.740 [2024-11-18 18:07:49.224727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:30.740 [2024-11-18 18:07:49.224807] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.999 [2024-11-18 18:07:49.366464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.999 [2024-11-18 18:07:49.434816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:30.999 [2024-11-18 18:07:49.435451] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.999 [2024-11-18 18:07:49.435806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.999 [2024-11-18 18:07:49.436327] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
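For reference, the nvmf_veth_init sequence traced above boils down to the following standalone sketch of the test topology: three host-side veth peers (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) joined on bridge nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3 while the initiator keeps 10.0.0.1 on nvmf_init_if. This is an approximate reconstruction from the commands shown in the log, not the helper itself; it assumes root privileges plus iproute2 and iptables.

# approximate reconstruction of nvmf_veth_init (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair, stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # host bridge tying the pairs together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> first target address, as verified in the log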
00:09:30.999 [2024-11-18 18:07:49.436708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:30.999 [2024-11-18 18:07:49.436792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:30.999 [2024-11-18 18:07:49.436907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:30.999 [2024-11-18 18:07:49.436916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.566 18:07:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.566 18:07:50 -- common/autotest_common.sh@862 -- # return 0 00:09:31.566 18:07:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:31.566 18:07:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.566 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 18:07:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.825 18:07:50 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.825 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.825 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 [2024-11-18 18:07:50.196255] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.825 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.825 18:07:50 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.825 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.825 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 Malloc0 00:09:31.825 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.825 18:07:50 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.825 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.825 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.825 18:07:50 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.825 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.825 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.825 18:07:50 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.825 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.825 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.825 [2024-11-18 18:07:50.258798] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.825 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.825 18:07:50 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:31.825 18:07:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:31.825 18:07:50 -- nvmf/common.sh@520 -- # config=() 00:09:31.825 18:07:50 -- nvmf/common.sh@520 -- # local subsystem config 00:09:31.825 18:07:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:31.825 18:07:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:31.825 { 00:09:31.825 "params": { 00:09:31.825 "name": "Nvme$subsystem", 00:09:31.825 "trtype": "$TEST_TRANSPORT", 00:09:31.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.825 "adrfam": "ipv4", 00:09:31.825 "trsvcid": "$NVMF_PORT", 00:09:31.825 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.825 "hdgst": ${hdgst:-false}, 00:09:31.825 "ddgst": ${ddgst:-false} 00:09:31.825 }, 00:09:31.825 "method": "bdev_nvme_attach_controller" 00:09:31.825 } 00:09:31.825 EOF 00:09:31.825 )") 00:09:31.825 18:07:50 -- nvmf/common.sh@542 -- # cat 00:09:31.825 18:07:50 -- nvmf/common.sh@544 -- # jq . 00:09:31.825 18:07:50 -- nvmf/common.sh@545 -- # IFS=, 00:09:31.825 18:07:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:31.825 "params": { 00:09:31.825 "name": "Nvme1", 00:09:31.825 "trtype": "tcp", 00:09:31.825 "traddr": "10.0.0.2", 00:09:31.825 "adrfam": "ipv4", 00:09:31.825 "trsvcid": "4420", 00:09:31.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.825 "hdgst": false, 00:09:31.825 "ddgst": false 00:09:31.825 }, 00:09:31.825 "method": "bdev_nvme_attach_controller" 00:09:31.825 }' 00:09:31.825 [2024-11-18 18:07:50.313881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:31.825 [2024-11-18 18:07:50.313983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64194 ] 00:09:32.085 [2024-11-18 18:07:50.455356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.085 [2024-11-18 18:07:50.527477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.085 [2024-11-18 18:07:50.527641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.085 [2024-11-18 18:07:50.527650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.085 [2024-11-18 18:07:50.664681] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:32.085 [2024-11-18 18:07:50.664964] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:32.085 I/O targets: 00:09:32.085 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:32.085 00:09:32.085 00:09:32.085 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.085 http://cunit.sourceforge.net/ 00:09:32.085 00:09:32.085 00:09:32.085 Suite: bdevio tests on: Nvme1n1 00:09:32.085 Test: blockdev write read block ...passed 00:09:32.085 Test: blockdev write zeroes read block ...passed 00:09:32.085 Test: blockdev write zeroes read no split ...passed 00:09:32.344 Test: blockdev write zeroes read split ...passed 00:09:32.344 Test: blockdev write zeroes read split partial ...passed 00:09:32.344 Test: blockdev reset ...[2024-11-18 18:07:50.698089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:32.344 [2024-11-18 18:07:50.698183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc80 (9): Bad file descriptor 00:09:32.344 passed 00:09:32.344 Test: blockdev write read 8 blocks ...[2024-11-18 18:07:50.715059] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:32.344 passed 00:09:32.344 Test: blockdev write read size > 128k ...passed 00:09:32.344 Test: blockdev write read invalid size ...passed 00:09:32.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.344 Test: blockdev write read max offset ...passed 00:09:32.344 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.344 Test: blockdev writev readv 8 blocks ...passed 00:09:32.344 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.344 Test: blockdev writev readv block ...passed 00:09:32.344 Test: blockdev writev readv size > 128k ...passed 00:09:32.344 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.344 Test: blockdev comparev and writev ...[2024-11-18 18:07:50.723838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGpassed 00:09:32.344 Test: blockdev nvme passthru rw ...passed 00:09:32.344 Test: blockdev nvme passthru vendor specific ...passed 00:09:32.344 Test: blockdev nvme admin passthru ...L DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.724097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.724470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.724807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.724850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.724862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.725176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.344 [2024-11-18 18:07:50.725196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.725215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:09:32.344 [2024-11-18 18:07:50.725226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.726082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.344 [2024-11-18 18:07:50.726106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.726228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.344 [2024-11-18 18:07:50.726247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.726371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.344 [2024-11-18 18:07:50.726390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:32.344 [2024-11-18 18:07:50.726505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.344 [2024-11-18 18:07:50.726523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:32.345 passed 00:09:32.345 Test: blockdev copy ...passed 00:09:32.345 00:09:32.345 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.345 suites 1 1 n/a 0 0 00:09:32.345 tests 23 23 23 0 0 00:09:32.345 asserts 152 152 152 0 n/a 00:09:32.345 00:09:32.345 Elapsed time = 0.148 seconds 00:09:32.345 18:07:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.345 18:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.345 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:09:32.345 18:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.345 18:07:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:32.345 18:07:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:32.345 18:07:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:32.345 18:07:50 -- nvmf/common.sh@116 -- # sync 00:09:32.604 18:07:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:32.604 18:07:50 -- nvmf/common.sh@119 -- # set +e 00:09:32.604 18:07:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:32.604 18:07:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:32.604 rmmod nvme_tcp 00:09:32.604 rmmod nvme_fabrics 00:09:32.604 rmmod nvme_keyring 00:09:32.604 18:07:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:32.604 18:07:51 -- nvmf/common.sh@123 -- # set -e 00:09:32.604 18:07:51 -- nvmf/common.sh@124 -- # return 0 00:09:32.604 18:07:51 -- nvmf/common.sh@477 -- # '[' -n 64158 ']' 00:09:32.604 18:07:51 -- nvmf/common.sh@478 -- # killprocess 64158 00:09:32.604 18:07:51 -- common/autotest_common.sh@936 -- # '[' -z 64158 ']' 00:09:32.604 18:07:51 -- common/autotest_common.sh@940 -- # kill -0 64158 00:09:32.604 18:07:51 -- common/autotest_common.sh@941 -- # uname 00:09:32.604 18:07:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:32.604 18:07:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64158 00:09:32.604 18:07:51 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:09:32.604 18:07:51 -- 
common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:32.604 killing process with pid 64158 00:09:32.604 18:07:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64158' 00:09:32.604 18:07:51 -- common/autotest_common.sh@955 -- # kill 64158 00:09:32.604 18:07:51 -- common/autotest_common.sh@960 -- # wait 64158 00:09:32.864 18:07:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:32.864 18:07:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:32.864 18:07:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:32.864 18:07:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.864 18:07:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:32.864 18:07:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.864 18:07:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.864 18:07:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.864 18:07:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:32.864 00:09:32.864 real 0m2.658s 00:09:32.864 user 0m8.355s 00:09:32.864 sys 0m0.653s 00:09:32.864 18:07:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.864 18:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.864 ************************************ 00:09:32.864 END TEST nvmf_bdevio 00:09:32.864 ************************************ 00:09:32.864 18:07:51 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:09:32.864 18:07:51 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:32.864 18:07:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:32.864 18:07:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.864 18:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.864 ************************************ 00:09:32.864 START TEST nvmf_bdevio_no_huge 00:09:32.864 ************************************ 00:09:32.864 18:07:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:09:32.864 * Looking for test storage... 
00:09:32.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.864 18:07:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:32.864 18:07:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:32.864 18:07:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:33.124 18:07:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:33.124 18:07:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:33.124 18:07:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:33.124 18:07:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:33.124 18:07:51 -- scripts/common.sh@335 -- # IFS=.-: 00:09:33.124 18:07:51 -- scripts/common.sh@335 -- # read -ra ver1 00:09:33.124 18:07:51 -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.124 18:07:51 -- scripts/common.sh@336 -- # read -ra ver2 00:09:33.124 18:07:51 -- scripts/common.sh@337 -- # local 'op=<' 00:09:33.124 18:07:51 -- scripts/common.sh@339 -- # ver1_l=2 00:09:33.124 18:07:51 -- scripts/common.sh@340 -- # ver2_l=1 00:09:33.124 18:07:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:33.124 18:07:51 -- scripts/common.sh@343 -- # case "$op" in 00:09:33.124 18:07:51 -- scripts/common.sh@344 -- # : 1 00:09:33.124 18:07:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:33.124 18:07:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.124 18:07:51 -- scripts/common.sh@364 -- # decimal 1 00:09:33.124 18:07:51 -- scripts/common.sh@352 -- # local d=1 00:09:33.124 18:07:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.124 18:07:51 -- scripts/common.sh@354 -- # echo 1 00:09:33.124 18:07:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:33.124 18:07:51 -- scripts/common.sh@365 -- # decimal 2 00:09:33.124 18:07:51 -- scripts/common.sh@352 -- # local d=2 00:09:33.124 18:07:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.124 18:07:51 -- scripts/common.sh@354 -- # echo 2 00:09:33.124 18:07:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:33.124 18:07:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:33.124 18:07:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:33.124 18:07:51 -- scripts/common.sh@367 -- # return 0 00:09:33.124 18:07:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.124 18:07:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.124 --rc genhtml_branch_coverage=1 00:09:33.124 --rc genhtml_function_coverage=1 00:09:33.124 --rc genhtml_legend=1 00:09:33.124 --rc geninfo_all_blocks=1 00:09:33.124 --rc geninfo_unexecuted_blocks=1 00:09:33.124 00:09:33.124 ' 00:09:33.124 18:07:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.124 --rc genhtml_branch_coverage=1 00:09:33.124 --rc genhtml_function_coverage=1 00:09:33.124 --rc genhtml_legend=1 00:09:33.124 --rc geninfo_all_blocks=1 00:09:33.124 --rc geninfo_unexecuted_blocks=1 00:09:33.124 00:09:33.124 ' 00:09:33.124 18:07:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.124 --rc genhtml_branch_coverage=1 00:09:33.124 --rc genhtml_function_coverage=1 00:09:33.124 --rc genhtml_legend=1 00:09:33.124 --rc geninfo_all_blocks=1 00:09:33.124 --rc geninfo_unexecuted_blocks=1 00:09:33.124 00:09:33.124 ' 00:09:33.124 
18:07:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:33.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.124 --rc genhtml_branch_coverage=1 00:09:33.124 --rc genhtml_function_coverage=1 00:09:33.124 --rc genhtml_legend=1 00:09:33.124 --rc geninfo_all_blocks=1 00:09:33.124 --rc geninfo_unexecuted_blocks=1 00:09:33.124 00:09:33.124 ' 00:09:33.124 18:07:51 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.124 18:07:51 -- nvmf/common.sh@7 -- # uname -s 00:09:33.124 18:07:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.124 18:07:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.124 18:07:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.124 18:07:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.124 18:07:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.124 18:07:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.124 18:07:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.124 18:07:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.124 18:07:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.124 18:07:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.124 18:07:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:33.124 18:07:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:33.124 18:07:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.124 18:07:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.124 18:07:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:33.124 18:07:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.124 18:07:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.124 18:07:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.124 18:07:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.124 18:07:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.124 18:07:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.125 18:07:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.125 18:07:51 -- paths/export.sh@5 -- # export PATH 00:09:33.125 18:07:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.125 18:07:51 -- nvmf/common.sh@46 -- # : 0 00:09:33.125 18:07:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:33.125 18:07:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:33.125 18:07:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:33.125 18:07:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.125 18:07:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.125 18:07:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:33.125 18:07:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:33.125 18:07:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:33.125 18:07:51 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.125 18:07:51 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.125 18:07:51 -- target/bdevio.sh@14 -- # nvmftestinit 00:09:33.125 18:07:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:33.125 18:07:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.125 18:07:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:33.125 18:07:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:33.125 18:07:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:33.125 18:07:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.125 18:07:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.125 18:07:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.125 18:07:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:33.125 18:07:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:33.125 18:07:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:33.125 18:07:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:33.125 18:07:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:33.125 18:07:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:33.125 18:07:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.125 18:07:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.125 18:07:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:33.125 18:07:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:33.125 18:07:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:33.125 18:07:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:33.125 18:07:51 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:33.125 18:07:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.125 18:07:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:33.125 18:07:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:33.125 18:07:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:33.125 18:07:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:33.125 18:07:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:33.125 18:07:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:33.125 Cannot find device "nvmf_tgt_br" 00:09:33.125 18:07:51 -- nvmf/common.sh@154 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.125 Cannot find device "nvmf_tgt_br2" 00:09:33.125 18:07:51 -- nvmf/common.sh@155 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:33.125 18:07:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:33.125 Cannot find device "nvmf_tgt_br" 00:09:33.125 18:07:51 -- nvmf/common.sh@157 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:33.125 Cannot find device "nvmf_tgt_br2" 00:09:33.125 18:07:51 -- nvmf/common.sh@158 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:33.125 18:07:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:33.125 18:07:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.125 18:07:51 -- nvmf/common.sh@161 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:33.125 18:07:51 -- nvmf/common.sh@162 -- # true 00:09:33.125 18:07:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:33.125 18:07:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:33.125 18:07:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:33.125 18:07:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:33.125 18:07:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:33.385 18:07:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:33.385 18:07:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:33.385 18:07:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:33.385 18:07:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:33.385 18:07:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:33.385 18:07:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:33.385 18:07:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:33.385 18:07:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:33.385 18:07:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:33.385 18:07:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:33.385 18:07:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:33.385 18:07:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:33.385 18:07:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:33.385 18:07:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:33.385 18:07:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:33.385 18:07:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:33.385 18:07:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:33.385 18:07:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:33.385 18:07:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:33.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:33.385 00:09:33.385 --- 10.0.0.2 ping statistics --- 00:09:33.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.385 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:33.385 18:07:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:33.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:33.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:33.385 00:09:33.385 --- 10.0.0.3 ping statistics --- 00:09:33.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.385 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:33.385 18:07:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:33.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:33.385 00:09:33.385 --- 10.0.0.1 ping statistics --- 00:09:33.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.385 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:33.385 18:07:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.385 18:07:51 -- nvmf/common.sh@421 -- # return 0 00:09:33.385 18:07:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:33.385 18:07:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.385 18:07:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:33.385 18:07:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:33.385 18:07:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.385 18:07:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:33.385 18:07:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:33.385 18:07:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:33.385 18:07:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:33.385 18:07:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.386 18:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:33.386 18:07:51 -- nvmf/common.sh@469 -- # nvmfpid=64370 00:09:33.386 18:07:51 -- nvmf/common.sh@470 -- # waitforlisten 64370 00:09:33.386 18:07:51 -- common/autotest_common.sh@829 -- # '[' -z 64370 ']' 00:09:33.386 18:07:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:09:33.386 18:07:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.386 18:07:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
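In this second pass the target (and, further down, the bdevio initiator) runs with --no-huge -s 1024, so DPDK falls back to anonymous memory capped at 1024 MB instead of hugepages, and -m 0x78 pins the reactors to cores 3-6, matching the reactor messages that follow. The rpc_cmd calls traced below stand up the TCP subsystem; a rough command-line equivalent is sketched here, under the assumption that rpc_cmd dispatches to the standard scripts/rpc.py frontend on /var/tmp/spdk.sock (flags copied from the log, not invented).

# launch the target inside the test namespace without hugepages (paths as in the log)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

# once the target is listening on /var/tmp/spdk.sock, configure it (sketch, assuming scripts/rpc.py)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # transport flags as traced in the log
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420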
00:09:33.386 18:07:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.386 18:07:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.386 18:07:51 -- common/autotest_common.sh@10 -- # set +x 00:09:33.386 [2024-11-18 18:07:51.930977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:33.386 [2024-11-18 18:07:51.931087] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:09:33.646 [2024-11-18 18:07:52.082799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.646 [2024-11-18 18:07:52.182250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.646 [2024-11-18 18:07:52.182376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.646 [2024-11-18 18:07:52.182387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.646 [2024-11-18 18:07:52.182394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.646 [2024-11-18 18:07:52.182580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:33.646 [2024-11-18 18:07:52.183001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:33.646 [2024-11-18 18:07:52.183160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:33.646 [2024-11-18 18:07:52.183400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.584 18:07:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.584 18:07:52 -- common/autotest_common.sh@862 -- # return 0 00:09:34.584 18:07:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:34.584 18:07:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:34.584 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 18:07:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.584 18:07:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.584 18:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.584 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 [2024-11-18 18:07:52.903684] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.584 18:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.584 18:07:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.584 18:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.584 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 Malloc0 00:09:34.584 18:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.584 18:07:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.584 18:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.584 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 18:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.584 18:07:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.584 18:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.584 
18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 18:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.584 18:07:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.584 18:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.584 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:09:34.584 [2024-11-18 18:07:52.941479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.584 18:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.584 18:07:52 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:09:34.584 18:07:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:34.584 18:07:52 -- nvmf/common.sh@520 -- # config=() 00:09:34.584 18:07:52 -- nvmf/common.sh@520 -- # local subsystem config 00:09:34.584 18:07:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:34.584 18:07:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:34.584 { 00:09:34.584 "params": { 00:09:34.584 "name": "Nvme$subsystem", 00:09:34.584 "trtype": "$TEST_TRANSPORT", 00:09:34.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.584 "adrfam": "ipv4", 00:09:34.584 "trsvcid": "$NVMF_PORT", 00:09:34.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.584 "hdgst": ${hdgst:-false}, 00:09:34.584 "ddgst": ${ddgst:-false} 00:09:34.584 }, 00:09:34.584 "method": "bdev_nvme_attach_controller" 00:09:34.584 } 00:09:34.584 EOF 00:09:34.584 )") 00:09:34.584 18:07:52 -- nvmf/common.sh@542 -- # cat 00:09:34.584 18:07:52 -- nvmf/common.sh@544 -- # jq . 00:09:34.584 18:07:52 -- nvmf/common.sh@545 -- # IFS=, 00:09:34.584 18:07:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:34.584 "params": { 00:09:34.584 "name": "Nvme1", 00:09:34.584 "trtype": "tcp", 00:09:34.584 "traddr": "10.0.0.2", 00:09:34.584 "adrfam": "ipv4", 00:09:34.584 "trsvcid": "4420", 00:09:34.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.584 "hdgst": false, 00:09:34.584 "ddgst": false 00:09:34.584 }, 00:09:34.584 "method": "bdev_nvme_attach_controller" 00:09:34.584 }' 00:09:34.584 [2024-11-18 18:07:52.995358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.584 [2024-11-18 18:07:52.995452] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64406 ] 00:09:34.584 [2024-11-18 18:07:53.138715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:34.843 [2024-11-18 18:07:53.245111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.844 [2024-11-18 18:07:53.245565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.844 [2024-11-18 18:07:53.245581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.844 [2024-11-18 18:07:53.394381] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:34.844 [2024-11-18 18:07:53.394425] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:34.844 I/O targets: 00:09:34.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:34.844 00:09:34.844 00:09:34.844 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.844 http://cunit.sourceforge.net/ 00:09:34.844 00:09:34.844 00:09:34.844 Suite: bdevio tests on: Nvme1n1 00:09:34.844 Test: blockdev write read block ...passed 00:09:34.844 Test: blockdev write zeroes read block ...passed 00:09:34.844 Test: blockdev write zeroes read no split ...passed 00:09:34.844 Test: blockdev write zeroes read split ...passed 00:09:34.844 Test: blockdev write zeroes read split partial ...passed 00:09:34.844 Test: blockdev reset ...[2024-11-18 18:07:53.432987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:34.844 [2024-11-18 18:07:53.433101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x577680 (9): Bad file descriptor 00:09:34.844 [2024-11-18 18:07:53.444496] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:34.844 passed 00:09:34.844 Test: blockdev write read 8 blocks ...passed 00:09:35.104 Test: blockdev write read size > 128k ...passed 00:09:35.104 Test: blockdev write read invalid size ...passed 00:09:35.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:35.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:35.104 Test: blockdev write read max offset ...passed 00:09:35.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:35.104 Test: blockdev writev readv 8 blocks ...passed 00:09:35.104 Test: blockdev writev readv 30 x 1block ...passed 00:09:35.104 Test: blockdev writev readv block ...passed 00:09:35.104 Test: blockdev writev readv size > 128k ...passed 00:09:35.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:35.104 Test: blockdev comparev and writev ...[2024-11-18 18:07:53.452481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.452550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.452573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.452600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.452899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.452926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.452944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.452955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.453368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.453398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.453417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.453427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.453848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.453880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.453898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:35.104 [2024-11-18 18:07:53.453908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:35.104 passed 00:09:35.104 Test: blockdev nvme passthru rw ...passed 00:09:35.104 Test: blockdev nvme passthru vendor specific ...[2024-11-18 18:07:53.454918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.104 [2024-11-18 18:07:53.454943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.455044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.104 [2024-11-18 18:07:53.455061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.455168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.104 [2024-11-18 18:07:53.455193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:35.104 [2024-11-18 18:07:53.455293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:35.104 [2024-11-18 18:07:53.455318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:35.104 passed 00:09:35.104 Test: blockdev nvme admin passthru ...passed 00:09:35.104 Test: blockdev copy ...passed 00:09:35.104 00:09:35.104 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.104 suites 1 1 n/a 0 0 00:09:35.104 tests 23 23 23 0 0 00:09:35.104 asserts 152 152 152 0 n/a 00:09:35.104 00:09:35.104 Elapsed time = 0.160 seconds 00:09:35.364 18:07:53 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.364 18:07:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.364 18:07:53 -- common/autotest_common.sh@10 -- # set +x 00:09:35.364 18:07:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.364 18:07:53 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:35.364 18:07:53 -- target/bdevio.sh@30 -- # nvmftestfini 00:09:35.364 18:07:53 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:09:35.364 18:07:53 -- nvmf/common.sh@116 -- # sync 00:09:35.364 18:07:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:35.364 18:07:53 -- nvmf/common.sh@119 -- # set +e 00:09:35.364 18:07:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:35.364 18:07:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:35.364 rmmod nvme_tcp 00:09:35.364 rmmod nvme_fabrics 00:09:35.364 rmmod nvme_keyring 00:09:35.364 18:07:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:35.364 18:07:53 -- nvmf/common.sh@123 -- # set -e 00:09:35.364 18:07:53 -- nvmf/common.sh@124 -- # return 0 00:09:35.364 18:07:53 -- nvmf/common.sh@477 -- # '[' -n 64370 ']' 00:09:35.364 18:07:53 -- nvmf/common.sh@478 -- # killprocess 64370 00:09:35.364 18:07:53 -- common/autotest_common.sh@936 -- # '[' -z 64370 ']' 00:09:35.364 18:07:53 -- common/autotest_common.sh@940 -- # kill -0 64370 00:09:35.364 18:07:53 -- common/autotest_common.sh@941 -- # uname 00:09:35.364 18:07:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.364 18:07:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64370 00:09:35.364 18:07:53 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:09:35.364 18:07:53 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:09:35.364 18:07:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64370' 00:09:35.364 killing process with pid 64370 00:09:35.364 18:07:53 -- common/autotest_common.sh@955 -- # kill 64370 00:09:35.364 18:07:53 -- common/autotest_common.sh@960 -- # wait 64370 00:09:35.932 18:07:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:35.932 18:07:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:35.932 18:07:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:35.932 18:07:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.932 18:07:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:35.932 18:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.932 18:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.932 18:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.932 18:07:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:35.932 00:09:35.932 real 0m3.055s 00:09:35.932 user 0m9.655s 00:09:35.932 sys 0m1.113s 00:09:35.932 18:07:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.932 18:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.932 ************************************ 00:09:35.932 END TEST nvmf_bdevio_no_huge 00:09:35.932 ************************************ 00:09:35.932 18:07:54 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:35.932 18:07:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:35.932 18:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.932 18:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:35.932 ************************************ 00:09:35.932 START TEST nvmf_tls 00:09:35.932 ************************************ 00:09:35.932 18:07:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:09:35.932 * Looking for test storage... 
00:09:35.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.932 18:07:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:35.932 18:07:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:35.932 18:07:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:36.192 18:07:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:36.192 18:07:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:36.192 18:07:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:36.192 18:07:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:36.192 18:07:54 -- scripts/common.sh@335 -- # IFS=.-: 00:09:36.192 18:07:54 -- scripts/common.sh@335 -- # read -ra ver1 00:09:36.192 18:07:54 -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.192 18:07:54 -- scripts/common.sh@336 -- # read -ra ver2 00:09:36.192 18:07:54 -- scripts/common.sh@337 -- # local 'op=<' 00:09:36.192 18:07:54 -- scripts/common.sh@339 -- # ver1_l=2 00:09:36.192 18:07:54 -- scripts/common.sh@340 -- # ver2_l=1 00:09:36.192 18:07:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:36.192 18:07:54 -- scripts/common.sh@343 -- # case "$op" in 00:09:36.192 18:07:54 -- scripts/common.sh@344 -- # : 1 00:09:36.192 18:07:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:36.192 18:07:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.192 18:07:54 -- scripts/common.sh@364 -- # decimal 1 00:09:36.192 18:07:54 -- scripts/common.sh@352 -- # local d=1 00:09:36.192 18:07:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.192 18:07:54 -- scripts/common.sh@354 -- # echo 1 00:09:36.192 18:07:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:36.192 18:07:54 -- scripts/common.sh@365 -- # decimal 2 00:09:36.192 18:07:54 -- scripts/common.sh@352 -- # local d=2 00:09:36.192 18:07:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.192 18:07:54 -- scripts/common.sh@354 -- # echo 2 00:09:36.192 18:07:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:36.192 18:07:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:36.192 18:07:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:36.192 18:07:54 -- scripts/common.sh@367 -- # return 0 00:09:36.192 18:07:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.192 18:07:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.192 --rc genhtml_branch_coverage=1 00:09:36.192 --rc genhtml_function_coverage=1 00:09:36.192 --rc genhtml_legend=1 00:09:36.192 --rc geninfo_all_blocks=1 00:09:36.192 --rc geninfo_unexecuted_blocks=1 00:09:36.192 00:09:36.192 ' 00:09:36.192 18:07:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.192 --rc genhtml_branch_coverage=1 00:09:36.192 --rc genhtml_function_coverage=1 00:09:36.192 --rc genhtml_legend=1 00:09:36.192 --rc geninfo_all_blocks=1 00:09:36.192 --rc geninfo_unexecuted_blocks=1 00:09:36.192 00:09:36.192 ' 00:09:36.192 18:07:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.192 --rc genhtml_branch_coverage=1 00:09:36.192 --rc genhtml_function_coverage=1 00:09:36.192 --rc genhtml_legend=1 00:09:36.192 --rc geninfo_all_blocks=1 00:09:36.192 --rc geninfo_unexecuted_blocks=1 00:09:36.192 00:09:36.192 ' 00:09:36.192 
18:07:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:36.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.192 --rc genhtml_branch_coverage=1 00:09:36.192 --rc genhtml_function_coverage=1 00:09:36.192 --rc genhtml_legend=1 00:09:36.192 --rc geninfo_all_blocks=1 00:09:36.192 --rc geninfo_unexecuted_blocks=1 00:09:36.192 00:09:36.192 ' 00:09:36.192 18:07:54 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:36.192 18:07:54 -- nvmf/common.sh@7 -- # uname -s 00:09:36.192 18:07:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.192 18:07:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.192 18:07:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.192 18:07:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.192 18:07:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.192 18:07:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.192 18:07:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.192 18:07:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.192 18:07:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.192 18:07:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:36.192 18:07:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:09:36.192 18:07:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.192 18:07:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.192 18:07:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:36.192 18:07:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.192 18:07:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.192 18:07:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.192 18:07:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.192 18:07:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.192 18:07:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.192 18:07:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.192 18:07:54 -- paths/export.sh@5 -- # export PATH 00:09:36.192 18:07:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.192 18:07:54 -- nvmf/common.sh@46 -- # : 0 00:09:36.192 18:07:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:36.192 18:07:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:36.192 18:07:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:36.192 18:07:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.192 18:07:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.192 18:07:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:36.192 18:07:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:36.192 18:07:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:36.192 18:07:54 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.192 18:07:54 -- target/tls.sh@71 -- # nvmftestinit 00:09:36.192 18:07:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:36.192 18:07:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.192 18:07:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:36.192 18:07:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:36.192 18:07:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:36.192 18:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.192 18:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.192 18:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.192 18:07:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:36.192 18:07:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:36.192 18:07:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.192 18:07:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.192 18:07:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:36.192 18:07:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:36.192 18:07:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:36.192 18:07:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:36.192 18:07:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:36.192 
18:07:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.192 18:07:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:36.192 18:07:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:36.192 18:07:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:36.192 18:07:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:36.192 18:07:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:36.192 18:07:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:36.192 Cannot find device "nvmf_tgt_br" 00:09:36.192 18:07:54 -- nvmf/common.sh@154 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.193 Cannot find device "nvmf_tgt_br2" 00:09:36.193 18:07:54 -- nvmf/common.sh@155 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:36.193 18:07:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:36.193 Cannot find device "nvmf_tgt_br" 00:09:36.193 18:07:54 -- nvmf/common.sh@157 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:36.193 Cannot find device "nvmf_tgt_br2" 00:09:36.193 18:07:54 -- nvmf/common.sh@158 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:36.193 18:07:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:36.193 18:07:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.193 18:07:54 -- nvmf/common.sh@161 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.193 18:07:54 -- nvmf/common.sh@162 -- # true 00:09:36.193 18:07:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.193 18:07:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.193 18:07:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.193 18:07:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.477 18:07:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.477 18:07:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.477 18:07:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.477 18:07:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.477 18:07:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.477 18:07:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:36.477 18:07:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:36.477 18:07:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:36.477 18:07:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:36.477 18:07:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.477 18:07:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:36.477 18:07:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.477 18:07:54 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:36.477 18:07:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:36.477 18:07:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.477 18:07:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.477 18:07:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.477 18:07:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.477 18:07:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.477 18:07:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:36.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:09:36.478 00:09:36.478 --- 10.0.0.2 ping statistics --- 00:09:36.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.478 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:36.478 18:07:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:36.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:09:36.478 00:09:36.478 --- 10.0.0.3 ping statistics --- 00:09:36.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.478 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:36.478 18:07:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:36.478 00:09:36.478 --- 10.0.0.1 ping statistics --- 00:09:36.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.478 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:36.478 18:07:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.478 18:07:54 -- nvmf/common.sh@421 -- # return 0 00:09:36.478 18:07:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.478 18:07:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.478 18:07:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.478 18:07:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.478 18:07:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.478 18:07:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.478 18:07:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.478 18:07:54 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:09:36.478 18:07:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.478 18:07:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.478 18:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:36.478 18:07:54 -- nvmf/common.sh@469 -- # nvmfpid=64592 00:09:36.478 18:07:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:09:36.478 18:07:54 -- nvmf/common.sh@470 -- # waitforlisten 64592 00:09:36.478 18:07:54 -- common/autotest_common.sh@829 -- # '[' -z 64592 ']' 00:09:36.478 18:07:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.478 18:07:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
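Stripped of the xtrace noise, the nvmf_veth_init sequence traced above reduces to the commands below (interface names, addresses and iptables rules copied from the log; the second target interface and the teardown of stale devices are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target address inside the namespace
# The target itself is then started inside the namespace, as above:
#   ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc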
00:09:36.478 18:07:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.478 18:07:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.478 18:07:54 -- common/autotest_common.sh@10 -- # set +x 00:09:36.478 [2024-11-18 18:07:55.026724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:36.478 [2024-11-18 18:07:55.026823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.775 [2024-11-18 18:07:55.164558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.775 [2024-11-18 18:07:55.232422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.775 [2024-11-18 18:07:55.232612] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.775 [2024-11-18 18:07:55.232630] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.775 [2024-11-18 18:07:55.232641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.775 [2024-11-18 18:07:55.232676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.343 18:07:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.343 18:07:55 -- common/autotest_common.sh@862 -- # return 0 00:09:37.343 18:07:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.343 18:07:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.343 18:07:55 -- common/autotest_common.sh@10 -- # set +x 00:09:37.602 18:07:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.602 18:07:55 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:09:37.602 18:07:55 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:09:37.861 true 00:09:37.861 18:07:56 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:37.861 18:07:56 -- target/tls.sh@82 -- # jq -r .tls_version 00:09:38.120 18:07:56 -- target/tls.sh@82 -- # version=0 00:09:38.120 18:07:56 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:09:38.120 18:07:56 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:38.380 18:07:56 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:38.380 18:07:56 -- target/tls.sh@90 -- # jq -r .tls_version 00:09:38.639 18:07:57 -- target/tls.sh@90 -- # version=13 00:09:38.639 18:07:57 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:09:38.639 18:07:57 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:09:38.898 18:07:57 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:38.898 18:07:57 -- target/tls.sh@98 -- # jq -r .tls_version 00:09:39.156 18:07:57 -- target/tls.sh@98 -- # version=7 00:09:39.156 18:07:57 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:09:39.156 18:07:57 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:39.156 18:07:57 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:09:39.156 18:07:57 -- 
target/tls.sh@105 -- # ktls=false 00:09:39.156 18:07:57 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:09:39.156 18:07:57 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:09:39.414 18:07:57 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:39.414 18:07:57 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:09:39.673 18:07:58 -- target/tls.sh@113 -- # ktls=true 00:09:39.673 18:07:58 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:09:39.673 18:07:58 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:09:39.932 18:07:58 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:09:39.932 18:07:58 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:09:40.190 18:07:58 -- target/tls.sh@121 -- # ktls=false 00:09:40.190 18:07:58 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:09:40.190 18:07:58 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:09:40.190 18:07:58 -- target/tls.sh@49 -- # local key hash crc 00:09:40.190 18:07:58 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:09:40.190 18:07:58 -- target/tls.sh@51 -- # hash=01 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # gzip -1 -c 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # tail -c8 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # head -c 4 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # crc='p$H�' 00:09:40.190 18:07:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:40.190 18:07:58 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:09:40.190 18:07:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:40.190 18:07:58 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:40.190 18:07:58 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:09:40.190 18:07:58 -- target/tls.sh@49 -- # local key hash crc 00:09:40.190 18:07:58 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:09:40.190 18:07:58 -- target/tls.sh@51 -- # hash=01 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # gzip -1 -c 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # tail -c8 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # head -c 4 00:09:40.190 18:07:58 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:09:40.190 18:07:58 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:09:40.190 18:07:58 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:09:40.191 18:07:58 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:40.191 18:07:58 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:09:40.191 18:07:58 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:40.191 18:07:58 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:40.191 18:07:58 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:09:40.191 18:07:58 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
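The format_interchange_psk trace above turns the configured secret into the NVMe TLS interchange form by appending a CRC32 and base64-encoding the result; gzip -1 is used purely as a CRC32 generator, since its 8-byte trailer is the CRC32 followed by the input length. As a standalone sketch (key value and label taken from the trace, output path illustrative):

key=00112233445566778899aabbccddeeff
hash=01
# last 8 bytes of the gzip stream = CRC32 + ISIZE; keep only the 4 CRC bytes
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
echo -n "$psk" > key1.txt
chmod 0600 key1.txt
# yields NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: for the key above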
00:09:40.191 18:07:58 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:40.191 18:07:58 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:09:40.191 18:07:58 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:09:40.450 18:07:58 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:09:40.708 18:07:59 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:40.708 18:07:59 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:40.708 18:07:59 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:09:40.967 [2024-11-18 18:07:59.476513] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.967 18:07:59 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:09:41.226 18:07:59 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:09:41.485 [2024-11-18 18:07:59.916636] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:09:41.485 [2024-11-18 18:07:59.916866] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.485 18:07:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:09:41.744 malloc0 00:09:41.744 18:08:00 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:42.003 18:08:00 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:42.314 18:08:00 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:52.292 Initializing NVMe Controllers 00:09:52.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:52.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:52.292 Initialization complete. Launching workers. 
00:09:52.292 ======================================================== 00:09:52.292 Latency(us) 00:09:52.292 Device Information : IOPS MiB/s Average min max 00:09:52.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11133.40 43.49 5749.44 1555.95 8255.84 00:09:52.292 ======================================================== 00:09:52.292 Total : 11133.40 43.49 5749.44 1555.95 8255.84 00:09:52.292 00:09:52.292 18:08:10 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:52.292 18:08:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:09:52.292 18:08:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:09:52.292 18:08:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:09:52.293 18:08:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:09:52.293 18:08:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:52.293 18:08:10 -- target/tls.sh@28 -- # bdevperf_pid=64840 00:09:52.293 18:08:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:52.293 18:08:10 -- target/tls.sh@31 -- # waitforlisten 64840 /var/tmp/bdevperf.sock 00:09:52.293 18:08:10 -- common/autotest_common.sh@829 -- # '[' -z 64840 ']' 00:09:52.293 18:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.293 18:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.293 18:08:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:09:52.293 18:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:52.293 18:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.293 18:08:10 -- common/autotest_common.sh@10 -- # set +x 00:09:52.293 [2024-11-18 18:08:10.889169] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:52.293 [2024-11-18 18:08:10.889282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64840 ] 00:09:52.551 [2024-11-18 18:08:11.030163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.551 [2024-11-18 18:08:11.097142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.488 18:08:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.488 18:08:11 -- common/autotest_common.sh@862 -- # return 0 00:09:53.488 18:08:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:09:53.746 [2024-11-18 18:08:12.113583] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:09:53.746 TLSTESTn1 00:09:53.746 18:08:12 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:09:53.746 Running I/O for 10 seconds... 
00:10:03.751 00:10:03.751 Latency(us) 00:10:03.751 [2024-11-18T18:08:22.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.751 [2024-11-18T18:08:22.355Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:03.751 Verification LBA range: start 0x0 length 0x2000 00:10:03.751 TLSTESTn1 : 10.01 6276.05 24.52 0.00 0.00 20363.76 4855.62 24784.52 00:10:03.751 [2024-11-18T18:08:22.355Z] =================================================================================================================== 00:10:03.751 [2024-11-18T18:08:22.355Z] Total : 6276.05 24.52 0.00 0.00 20363.76 4855.62 24784.52 00:10:03.751 0 00:10:03.751 18:08:22 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.751 18:08:22 -- target/tls.sh@45 -- # killprocess 64840 00:10:03.751 18:08:22 -- common/autotest_common.sh@936 -- # '[' -z 64840 ']' 00:10:03.751 18:08:22 -- common/autotest_common.sh@940 -- # kill -0 64840 00:10:03.751 18:08:22 -- common/autotest_common.sh@941 -- # uname 00:10:03.751 18:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.751 18:08:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64840 00:10:04.010 18:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:04.010 18:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:04.010 18:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64840' 00:10:04.010 killing process with pid 64840 00:10:04.010 18:08:22 -- common/autotest_common.sh@955 -- # kill 64840 00:10:04.010 Received shutdown signal, test time was about 10.000000 seconds 00:10:04.010 00:10:04.010 Latency(us) 00:10:04.010 [2024-11-18T18:08:22.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.010 [2024-11-18T18:08:22.614Z] =================================================================================================================== 00:10:04.010 [2024-11-18T18:08:22.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:04.010 18:08:22 -- common/autotest_common.sh@960 -- # wait 64840 00:10:04.010 18:08:22 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:04.010 18:08:22 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.010 18:08:22 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:04.010 18:08:22 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:04.010 18:08:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.010 18:08:22 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:04.010 18:08:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.010 18:08:22 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:04.010 18:08:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:04.010 18:08:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:04.010 18:08:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:04.010 18:08:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:04.010 18:08:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:04.010 
18:08:22 -- target/tls.sh@28 -- # bdevperf_pid=64968 00:10:04.010 18:08:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:04.010 18:08:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:04.010 18:08:22 -- target/tls.sh@31 -- # waitforlisten 64968 /var/tmp/bdevperf.sock 00:10:04.010 18:08:22 -- common/autotest_common.sh@829 -- # '[' -z 64968 ']' 00:10:04.010 18:08:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:04.010 18:08:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:04.010 18:08:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:04.010 18:08:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:04.010 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:10:04.010 [2024-11-18 18:08:22.608320] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:04.010 [2024-11-18 18:08:22.608622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64968 ] 00:10:04.269 [2024-11-18 18:08:22.747269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.269 [2024-11-18 18:08:22.797296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.203 18:08:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.203 18:08:23 -- common/autotest_common.sh@862 -- # return 0 00:10:05.203 18:08:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:05.203 [2024-11-18 18:08:23.804031] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:05.462 [2024-11-18 18:08:23.813304] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:05.462 [2024-11-18 18:08:23.813915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173650 (107): Transport endpoint is not connected 00:10:05.462 [2024-11-18 18:08:23.814899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2173650 (9): Bad file descriptor 00:10:05.462 [2024-11-18 18:08:23.815895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:05.462 [2024-11-18 18:08:23.815914] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:05.462 [2024-11-18 18:08:23.815923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:05.462 request: 00:10:05.462 { 00:10:05.462 "name": "TLSTEST", 00:10:05.462 "trtype": "tcp", 00:10:05.462 "traddr": "10.0.0.2", 00:10:05.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.462 "adrfam": "ipv4", 00:10:05.462 "trsvcid": "4420", 00:10:05.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.462 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:05.462 "method": "bdev_nvme_attach_controller", 00:10:05.462 "req_id": 1 00:10:05.462 } 00:10:05.462 Got JSON-RPC error response 00:10:05.462 response: 00:10:05.462 { 00:10:05.462 "code": -32602, 00:10:05.462 "message": "Invalid parameters" 00:10:05.462 } 00:10:05.462 18:08:23 -- target/tls.sh@36 -- # killprocess 64968 00:10:05.462 18:08:23 -- common/autotest_common.sh@936 -- # '[' -z 64968 ']' 00:10:05.462 18:08:23 -- common/autotest_common.sh@940 -- # kill -0 64968 00:10:05.462 18:08:23 -- common/autotest_common.sh@941 -- # uname 00:10:05.462 18:08:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:05.462 18:08:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64968 00:10:05.462 killing process with pid 64968 00:10:05.462 Received shutdown signal, test time was about 10.000000 seconds 00:10:05.462 00:10:05.462 Latency(us) 00:10:05.462 [2024-11-18T18:08:24.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.462 [2024-11-18T18:08:24.066Z] =================================================================================================================== 00:10:05.462 [2024-11-18T18:08:24.066Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:05.462 18:08:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:05.462 18:08:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:05.462 18:08:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64968' 00:10:05.462 18:08:23 -- common/autotest_common.sh@955 -- # kill 64968 00:10:05.462 18:08:23 -- common/autotest_common.sh@960 -- # wait 64968 00:10:05.462 18:08:24 -- target/tls.sh@37 -- # return 1 00:10:05.462 18:08:24 -- common/autotest_common.sh@653 -- # es=1 00:10:05.462 18:08:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.462 18:08:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.462 18:08:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.462 18:08:24 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:05.462 18:08:24 -- common/autotest_common.sh@650 -- # local es=0 00:10:05.462 18:08:24 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:05.462 18:08:24 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:05.462 18:08:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.462 18:08:24 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:05.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:05.462 18:08:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.462 18:08:24 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:05.462 18:08:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:05.462 18:08:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:05.462 18:08:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:10:05.462 18:08:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:05.462 18:08:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:05.462 18:08:24 -- target/tls.sh@28 -- # bdevperf_pid=65001 00:10:05.462 18:08:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:05.462 18:08:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:05.462 18:08:24 -- target/tls.sh@31 -- # waitforlisten 65001 /var/tmp/bdevperf.sock 00:10:05.462 18:08:24 -- common/autotest_common.sh@829 -- # '[' -z 65001 ']' 00:10:05.462 18:08:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:05.462 18:08:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.462 18:08:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:05.462 18:08:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.462 18:08:24 -- common/autotest_common.sh@10 -- # set +x 00:10:05.721 [2024-11-18 18:08:24.103468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:05.721 [2024-11-18 18:08:24.103781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65001 ] 00:10:05.721 [2024-11-18 18:08:24.237265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.721 [2024-11-18 18:08:24.287652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.656 18:08:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.656 18:08:25 -- common/autotest_common.sh@862 -- # return 0 00:10:06.656 18:08:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:06.915 [2024-11-18 18:08:25.280331] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:06.915 [2024-11-18 18:08:25.285021] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:06.915 [2024-11-18 18:08:25.285059] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:06.915 [2024-11-18 18:08:25.285124] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:06.915 [2024-11-18 18:08:25.285797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd20650 (107): Transport endpoint is not connected 00:10:06.915 [2024-11-18 18:08:25.286774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd20650 (9): Bad file descriptor 00:10:06.915 [2024-11-18 18:08:25.287768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:06.915 [2024-11-18 18:08:25.287791] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:06.915 [2024-11-18 18:08:25.287801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:06.915 request: 00:10:06.915 { 00:10:06.915 "name": "TLSTEST", 00:10:06.915 "trtype": "tcp", 00:10:06.915 "traddr": "10.0.0.2", 00:10:06.915 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:06.915 "adrfam": "ipv4", 00:10:06.915 "trsvcid": "4420", 00:10:06.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.915 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:06.915 "method": "bdev_nvme_attach_controller", 00:10:06.915 "req_id": 1 00:10:06.915 } 00:10:06.915 Got JSON-RPC error response 00:10:06.915 response: 00:10:06.915 { 00:10:06.915 "code": -32602, 00:10:06.915 "message": "Invalid parameters" 00:10:06.915 } 00:10:06.915 18:08:25 -- target/tls.sh@36 -- # killprocess 65001 00:10:06.915 18:08:25 -- common/autotest_common.sh@936 -- # '[' -z 65001 ']' 00:10:06.915 18:08:25 -- common/autotest_common.sh@940 -- # kill -0 65001 00:10:06.915 18:08:25 -- common/autotest_common.sh@941 -- # uname 00:10:06.915 18:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.915 18:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65001 00:10:06.915 killing process with pid 65001 00:10:06.915 Received shutdown signal, test time was about 10.000000 seconds 00:10:06.915 00:10:06.915 Latency(us) 00:10:06.915 [2024-11-18T18:08:25.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.915 [2024-11-18T18:08:25.519Z] =================================================================================================================== 00:10:06.915 [2024-11-18T18:08:25.519Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:06.915 18:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:06.915 18:08:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:06.915 18:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65001' 00:10:06.915 18:08:25 -- common/autotest_common.sh@955 -- # kill 65001 00:10:06.915 18:08:25 -- common/autotest_common.sh@960 -- # wait 65001 00:10:07.174 18:08:25 -- target/tls.sh@37 -- # return 1 00:10:07.174 18:08:25 -- common/autotest_common.sh@653 -- # es=1 00:10:07.174 18:08:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:07.174 18:08:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:07.174 18:08:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:07.174 18:08:25 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:07.174 18:08:25 -- common/autotest_common.sh@650 -- # local es=0 00:10:07.174 18:08:25 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:07.174 18:08:25 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:07.175 18:08:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.175 18:08:25 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:07.175 18:08:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.175 18:08:25 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:07.175 18:08:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:07.175 18:08:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:07.175 18:08:25 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:10:07.175 18:08:25 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:07.175 18:08:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.175 18:08:25 -- target/tls.sh@28 -- # bdevperf_pid=65023 00:10:07.175 18:08:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:07.175 18:08:25 -- target/tls.sh@31 -- # waitforlisten 65023 /var/tmp/bdevperf.sock 00:10:07.175 18:08:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:07.175 18:08:25 -- common/autotest_common.sh@829 -- # '[' -z 65023 ']' 00:10:07.175 18:08:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:07.175 18:08:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.175 18:08:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:07.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:07.175 18:08:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.175 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:10:07.175 [2024-11-18 18:08:25.563863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:07.175 [2024-11-18 18:08:25.564595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65023 ] 00:10:07.175 [2024-11-18 18:08:25.693890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.175 [2024-11-18 18:08:25.745422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.110 18:08:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.110 18:08:26 -- common/autotest_common.sh@862 -- # return 0 00:10:08.110 18:08:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:08.369 [2024-11-18 18:08:26.775861] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:08.369 [2024-11-18 18:08:26.783411] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:08.369 [2024-11-18 18:08:26.783448] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:08.369 [2024-11-18 18:08:26.783512] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:08.369 [2024-11-18 18:08:26.784385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa2650 (107): Transport endpoint is not connected 00:10:08.369 [2024-11-18 18:08:26.785376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa2650 (9): Bad file descriptor 00:10:08.369 [2024-11-18 18:08:26.786373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:08.369 [2024-11-18 18:08:26.786396] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:08.369 [2024-11-18 18:08:26.786422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:08.369 request: 00:10:08.369 { 00:10:08.369 "name": "TLSTEST", 00:10:08.369 "trtype": "tcp", 00:10:08.369 "traddr": "10.0.0.2", 00:10:08.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.369 "adrfam": "ipv4", 00:10:08.369 "trsvcid": "4420", 00:10:08.369 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:08.369 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:08.369 "method": "bdev_nvme_attach_controller", 00:10:08.369 "req_id": 1 00:10:08.369 } 00:10:08.369 Got JSON-RPC error response 00:10:08.369 response: 00:10:08.369 { 00:10:08.369 "code": -32602, 00:10:08.369 "message": "Invalid parameters" 00:10:08.369 } 00:10:08.369 18:08:26 -- target/tls.sh@36 -- # killprocess 65023 00:10:08.369 18:08:26 -- common/autotest_common.sh@936 -- # '[' -z 65023 ']' 00:10:08.369 18:08:26 -- common/autotest_common.sh@940 -- # kill -0 65023 00:10:08.369 18:08:26 -- common/autotest_common.sh@941 -- # uname 00:10:08.370 18:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.370 18:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65023 00:10:08.370 killing process with pid 65023 00:10:08.370 Received shutdown signal, test time was about 10.000000 seconds 00:10:08.370 00:10:08.370 Latency(us) 00:10:08.370 [2024-11-18T18:08:26.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.370 [2024-11-18T18:08:26.974Z] =================================================================================================================== 00:10:08.370 [2024-11-18T18:08:26.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:08.370 18:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:08.370 18:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:08.370 18:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65023' 00:10:08.370 18:08:26 -- common/autotest_common.sh@955 -- # kill 65023 00:10:08.370 18:08:26 -- common/autotest_common.sh@960 -- # wait 65023 00:10:08.628 18:08:27 -- target/tls.sh@37 -- # return 1 00:10:08.628 18:08:27 -- common/autotest_common.sh@653 -- # es=1 00:10:08.628 18:08:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:08.628 18:08:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:08.628 18:08:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:08.628 18:08:27 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:08.628 18:08:27 -- common/autotest_common.sh@650 -- # local es=0 00:10:08.628 18:08:27 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:08.628 18:08:27 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:08.628 18:08:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.629 18:08:27 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:08.629 18:08:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.629 18:08:27 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:08.629 18:08:27 -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:10:08.629 18:08:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:08.629 18:08:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:08.629 18:08:27 -- target/tls.sh@23 -- # psk= 00:10:08.629 18:08:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:08.629 18:08:27 -- target/tls.sh@28 -- # bdevperf_pid=65051 00:10:08.629 18:08:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:08.629 18:08:27 -- target/tls.sh@31 -- # waitforlisten 65051 /var/tmp/bdevperf.sock 00:10:08.629 18:08:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:08.629 18:08:27 -- common/autotest_common.sh@829 -- # '[' -z 65051 ']' 00:10:08.629 18:08:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:08.629 18:08:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.629 18:08:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:08.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:08.629 18:08:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.629 18:08:27 -- common/autotest_common.sh@10 -- # set +x 00:10:08.629 [2024-11-18 18:08:27.059210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:08.629 [2024-11-18 18:08:27.059448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65051 ] 00:10:08.629 [2024-11-18 18:08:27.192280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.887 [2024-11-18 18:08:27.243805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.454 18:08:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.455 18:08:28 -- common/autotest_common.sh@862 -- # return 0 00:10:09.455 18:08:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:09.714 [2024-11-18 18:08:28.233606] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:09.714 [2024-11-18 18:08:28.235927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f75010 (9): Bad file descriptor 00:10:09.714 [2024-11-18 18:08:28.236908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:09.714 [2024-11-18 18:08:28.237090] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:09.714 [2024-11-18 18:08:28.237211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:09.714 request: 00:10:09.714 { 00:10:09.714 "name": "TLSTEST", 00:10:09.714 "trtype": "tcp", 00:10:09.714 "traddr": "10.0.0.2", 00:10:09.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.714 "adrfam": "ipv4", 00:10:09.714 "trsvcid": "4420", 00:10:09.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.714 "method": "bdev_nvme_attach_controller", 00:10:09.714 "req_id": 1 00:10:09.714 } 00:10:09.714 Got JSON-RPC error response 00:10:09.714 response: 00:10:09.714 { 00:10:09.714 "code": -32602, 00:10:09.714 "message": "Invalid parameters" 00:10:09.714 } 00:10:09.714 18:08:28 -- target/tls.sh@36 -- # killprocess 65051 00:10:09.714 18:08:28 -- common/autotest_common.sh@936 -- # '[' -z 65051 ']' 00:10:09.714 18:08:28 -- common/autotest_common.sh@940 -- # kill -0 65051 00:10:09.714 18:08:28 -- common/autotest_common.sh@941 -- # uname 00:10:09.714 18:08:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.714 18:08:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65051 00:10:09.714 killing process with pid 65051 00:10:09.714 Received shutdown signal, test time was about 10.000000 seconds 00:10:09.714 00:10:09.714 Latency(us) 00:10:09.714 [2024-11-18T18:08:28.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.714 [2024-11-18T18:08:28.318Z] =================================================================================================================== 00:10:09.714 [2024-11-18T18:08:28.318Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:09.714 18:08:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:09.714 18:08:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:09.714 18:08:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65051' 00:10:09.714 18:08:28 -- common/autotest_common.sh@955 -- # kill 65051 00:10:09.714 18:08:28 -- common/autotest_common.sh@960 -- # wait 65051 00:10:09.972 18:08:28 -- target/tls.sh@37 -- # return 1 00:10:09.972 18:08:28 -- common/autotest_common.sh@653 -- # es=1 00:10:09.972 18:08:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:09.972 18:08:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:09.972 18:08:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:09.972 18:08:28 -- target/tls.sh@167 -- # killprocess 64592 00:10:09.972 18:08:28 -- common/autotest_common.sh@936 -- # '[' -z 64592 ']' 00:10:09.972 18:08:28 -- common/autotest_common.sh@940 -- # kill -0 64592 00:10:09.972 18:08:28 -- common/autotest_common.sh@941 -- # uname 00:10:09.972 18:08:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:09.972 18:08:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64592 00:10:09.972 killing process with pid 64592 00:10:09.973 18:08:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:09.973 18:08:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:09.973 18:08:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64592' 00:10:09.973 18:08:28 -- common/autotest_common.sh@955 -- # kill 64592 00:10:09.973 18:08:28 -- common/autotest_common.sh@960 -- # wait 64592 00:10:10.232 18:08:28 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:10.232 18:08:28 -- target/tls.sh@49 -- # local key hash crc 00:10:10.232 18:08:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:10.232 18:08:28 -- target/tls.sh@51 -- # hash=02 
00:10:10.232 18:08:28 -- target/tls.sh@52 -- # gzip -1 -c 00:10:10.232 18:08:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:10.232 18:08:28 -- target/tls.sh@52 -- # tail -c8 00:10:10.232 18:08:28 -- target/tls.sh@52 -- # head -c 4 00:10:10.232 18:08:28 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:10.232 18:08:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:10.232 18:08:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:10.232 18:08:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:10.232 18:08:28 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:10.232 18:08:28 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:10.232 18:08:28 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:10.232 18:08:28 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:10.232 18:08:28 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:10.232 18:08:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:10.232 18:08:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.232 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 18:08:28 -- nvmf/common.sh@469 -- # nvmfpid=65093 00:10:10.232 18:08:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:10.232 18:08:28 -- nvmf/common.sh@470 -- # waitforlisten 65093 00:10:10.232 18:08:28 -- common/autotest_common.sh@829 -- # '[' -z 65093 ']' 00:10:10.232 18:08:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.232 18:08:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.232 18:08:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.232 18:08:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.232 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.232 [2024-11-18 18:08:28.749567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:10.232 [2024-11-18 18:08:28.749663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.491 [2024-11-18 18:08:28.879955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.491 [2024-11-18 18:08:28.929273] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.491 [2024-11-18 18:08:28.929422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.491 [2024-11-18 18:08:28.929435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.491 [2024-11-18 18:08:28.929443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
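An aside on the format_interchange_psk trace above: the long-form key is built by taking the configured secret as ASCII text, appending its CRC32 (pulled out of a gzip trailer, which is why the crc variable prints as non-printable bytes), base64-encoding the concatenation, and prefixing the NVMeTLSkey-1:02: label (02 appears to select the SHA-384 variant). A minimal re-derivation sketch using only the commands already shown in the trace:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC32 (little-endian) + ISIZE; keep the 4 CRC bytes
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"     # reproduces NVMeTLSkey-1:02:MDAxMTIy...wWXNJw==: from the log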
00:10:10.491 [2024-11-18 18:08:28.929471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.436 18:08:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.436 18:08:29 -- common/autotest_common.sh@862 -- # return 0 00:10:11.436 18:08:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:11.436 18:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.436 18:08:29 -- common/autotest_common.sh@10 -- # set +x 00:10:11.436 18:08:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.436 18:08:29 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:11.436 18:08:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:11.436 18:08:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:11.436 [2024-11-18 18:08:29.989290] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.436 18:08:30 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:11.695 18:08:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:11.954 [2024-11-18 18:08:30.525434] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:11.954 [2024-11-18 18:08:30.525725] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.954 18:08:30 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:12.214 malloc0 00:10:12.214 18:08:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:12.474 18:08:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:12.733 18:08:31 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:12.733 18:08:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:12.733 18:08:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:12.733 18:08:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:12.733 18:08:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:12.733 18:08:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:12.733 18:08:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:12.733 18:08:31 -- target/tls.sh@28 -- # bdevperf_pid=65148 00:10:12.733 18:08:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:12.733 18:08:31 -- target/tls.sh@31 -- # waitforlisten 65148 /var/tmp/bdevperf.sock 00:10:12.733 18:08:31 -- common/autotest_common.sh@829 -- # '[' -z 65148 ']' 00:10:12.733 18:08:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.733 18:08:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.733 18:08:31 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:12.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:12.733 18:08:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.733 18:08:31 -- common/autotest_common.sh@10 -- # set +x 00:10:12.733 [2024-11-18 18:08:31.232176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.733 [2024-11-18 18:08:31.232567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65148 ] 00:10:12.992 [2024-11-18 18:08:31.369162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.992 [2024-11-18 18:08:31.436307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.951 18:08:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.951 18:08:32 -- common/autotest_common.sh@862 -- # return 0 00:10:13.951 18:08:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:13.951 [2024-11-18 18:08:32.365263] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:13.951 TLSTESTn1 00:10:13.951 18:08:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:14.217 Running I/O for 10 seconds... 00:10:24.204 00:10:24.204 Latency(us) 00:10:24.204 [2024-11-18T18:08:42.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.204 [2024-11-18T18:08:42.808Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:24.204 Verification LBA range: start 0x0 length 0x2000 00:10:24.204 TLSTESTn1 : 10.01 7094.85 27.71 0.00 0.00 18016.12 2174.60 21805.61 00:10:24.204 [2024-11-18T18:08:42.808Z] =================================================================================================================== 00:10:24.205 [2024-11-18T18:08:42.809Z] Total : 7094.85 27.71 0.00 0.00 18016.12 2174.60 21805.61 00:10:24.205 0 00:10:24.205 18:08:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.205 18:08:42 -- target/tls.sh@45 -- # killprocess 65148 00:10:24.205 18:08:42 -- common/autotest_common.sh@936 -- # '[' -z 65148 ']' 00:10:24.205 18:08:42 -- common/autotest_common.sh@940 -- # kill -0 65148 00:10:24.205 18:08:42 -- common/autotest_common.sh@941 -- # uname 00:10:24.205 18:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:24.205 18:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65148 00:10:24.205 killing process with pid 65148 00:10:24.205 Received shutdown signal, test time was about 10.000000 seconds 00:10:24.205 00:10:24.205 Latency(us) 00:10:24.205 [2024-11-18T18:08:42.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.205 [2024-11-18T18:08:42.809Z] =================================================================================================================== 00:10:24.205 [2024-11-18T18:08:42.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:24.205 18:08:42 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:24.205 18:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:24.205 18:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65148' 00:10:24.205 18:08:42 -- common/autotest_common.sh@955 -- # kill 65148 00:10:24.205 18:08:42 -- common/autotest_common.sh@960 -- # wait 65148 00:10:24.464 18:08:42 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:24.464 18:08:42 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:24.464 18:08:42 -- common/autotest_common.sh@650 -- # local es=0 00:10:24.464 18:08:42 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:24.464 18:08:42 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:24.464 18:08:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.464 18:08:42 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.464 18:08:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.464 18:08:42 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:24.464 18:08:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:24.464 18:08:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:24.464 18:08:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:24.464 18:08:42 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:24.464 18:08:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:24.464 18:08:42 -- target/tls.sh@28 -- # bdevperf_pid=65282 00:10:24.464 18:08:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:24.464 18:08:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:24.464 18:08:42 -- target/tls.sh@31 -- # waitforlisten 65282 /var/tmp/bdevperf.sock 00:10:24.464 18:08:42 -- common/autotest_common.sh@829 -- # '[' -z 65282 ']' 00:10:24.464 18:08:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.464 18:08:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.464 18:08:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.464 18:08:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.464 18:08:42 -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 [2024-11-18 18:08:42.885184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:24.464 [2024-11-18 18:08:42.885534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65282 ] 00:10:24.464 [2024-11-18 18:08:43.015929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.723 [2024-11-18 18:08:43.066892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.289 18:08:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.289 18:08:43 -- common/autotest_common.sh@862 -- # return 0 00:10:25.289 18:08:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:25.548 [2024-11-18 18:08:44.097575] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:25.548 [2024-11-18 18:08:44.097967] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:25.548 request: 00:10:25.548 { 00:10:25.548 "name": "TLSTEST", 00:10:25.548 "trtype": "tcp", 00:10:25.548 "traddr": "10.0.0.2", 00:10:25.548 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.548 "adrfam": "ipv4", 00:10:25.548 "trsvcid": "4420", 00:10:25.548 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.548 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:25.548 "method": "bdev_nvme_attach_controller", 00:10:25.548 "req_id": 1 00:10:25.548 } 00:10:25.548 Got JSON-RPC error response 00:10:25.548 response: 00:10:25.548 { 00:10:25.548 "code": -22, 00:10:25.548 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:25.548 } 00:10:25.548 18:08:44 -- target/tls.sh@36 -- # killprocess 65282 00:10:25.548 18:08:44 -- common/autotest_common.sh@936 -- # '[' -z 65282 ']' 00:10:25.548 18:08:44 -- common/autotest_common.sh@940 -- # kill -0 65282 00:10:25.548 18:08:44 -- common/autotest_common.sh@941 -- # uname 00:10:25.548 18:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.548 18:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65282 00:10:25.548 18:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:25.548 18:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:25.548 killing process with pid 65282 00:10:25.548 18:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65282' 00:10:25.548 18:08:44 -- common/autotest_common.sh@955 -- # kill 65282 00:10:25.548 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.548 00:10:25.548 Latency(us) 00:10:25.548 [2024-11-18T18:08:44.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.548 [2024-11-18T18:08:44.152Z] =================================================================================================================== 00:10:25.548 [2024-11-18T18:08:44.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:25.548 18:08:44 -- common/autotest_common.sh@960 -- # wait 65282 00:10:25.807 18:08:44 -- target/tls.sh@37 -- # return 1 00:10:25.807 18:08:44 -- common/autotest_common.sh@653 -- # es=1 00:10:25.807 18:08:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.807 18:08:44 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.807 18:08:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.807 18:08:44 -- target/tls.sh@183 -- # killprocess 65093 00:10:25.807 18:08:44 -- common/autotest_common.sh@936 -- # '[' -z 65093 ']' 00:10:25.807 18:08:44 -- common/autotest_common.sh@940 -- # kill -0 65093 00:10:25.807 18:08:44 -- common/autotest_common.sh@941 -- # uname 00:10:25.807 18:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:25.807 18:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65093 00:10:25.807 killing process with pid 65093 00:10:25.807 18:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:25.807 18:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:25.807 18:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65093' 00:10:25.807 18:08:44 -- common/autotest_common.sh@955 -- # kill 65093 00:10:25.807 18:08:44 -- common/autotest_common.sh@960 -- # wait 65093 00:10:26.066 18:08:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:10:26.066 18:08:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:26.066 18:08:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.066 18:08:44 -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 18:08:44 -- nvmf/common.sh@469 -- # nvmfpid=65315 00:10:26.066 18:08:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:26.066 18:08:44 -- nvmf/common.sh@470 -- # waitforlisten 65315 00:10:26.066 18:08:44 -- common/autotest_common.sh@829 -- # '[' -z 65315 ']' 00:10:26.066 18:08:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.066 18:08:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.066 18:08:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.066 18:08:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.066 18:08:44 -- common/autotest_common.sh@10 -- # set +x 00:10:26.066 [2024-11-18 18:08:44.585281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:26.066 [2024-11-18 18:08:44.585385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.325 [2024-11-18 18:08:44.720244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.325 [2024-11-18 18:08:44.772847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.325 [2024-11-18 18:08:44.773004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.325 [2024-11-18 18:08:44.773016] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.325 [2024-11-18 18:08:44.773023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.325 [2024-11-18 18:08:44.773051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.260 18:08:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.260 18:08:45 -- common/autotest_common.sh@862 -- # return 0 00:10:27.260 18:08:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:27.260 18:08:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.260 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:10:27.260 18:08:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.260 18:08:45 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:27.260 18:08:45 -- common/autotest_common.sh@650 -- # local es=0 00:10:27.260 18:08:45 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:27.260 18:08:45 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:10:27.260 18:08:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.260 18:08:45 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:10:27.260 18:08:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.260 18:08:45 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:27.260 18:08:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:27.260 18:08:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:27.519 [2024-11-18 18:08:45.871310] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.519 18:08:45 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:27.778 18:08:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:28.036 [2024-11-18 18:08:46.403488] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:28.036 [2024-11-18 18:08:46.404002] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.036 18:08:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:28.036 malloc0 00:10:28.036 18:08:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:28.295 18:08:46 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.555 [2024-11-18 18:08:47.037180] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:10:28.555 [2024-11-18 18:08:47.037228] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:10:28.555 [2024-11-18 18:08:47.037260] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:10:28.555 request: 00:10:28.555 { 00:10:28.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.555 "host": "nqn.2016-06.io.spdk:host1", 00:10:28.555 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:28.555 "method": "nvmf_subsystem_add_host", 00:10:28.555 
"req_id": 1 00:10:28.555 } 00:10:28.555 Got JSON-RPC error response 00:10:28.555 response: 00:10:28.555 { 00:10:28.555 "code": -32603, 00:10:28.555 "message": "Internal error" 00:10:28.555 } 00:10:28.555 18:08:47 -- common/autotest_common.sh@653 -- # es=1 00:10:28.555 18:08:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:28.555 18:08:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:28.555 18:08:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:28.555 18:08:47 -- target/tls.sh@189 -- # killprocess 65315 00:10:28.555 18:08:47 -- common/autotest_common.sh@936 -- # '[' -z 65315 ']' 00:10:28.555 18:08:47 -- common/autotest_common.sh@940 -- # kill -0 65315 00:10:28.555 18:08:47 -- common/autotest_common.sh@941 -- # uname 00:10:28.555 18:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.555 18:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65315 00:10:28.555 18:08:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:28.555 18:08:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:28.555 18:08:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65315' 00:10:28.555 killing process with pid 65315 00:10:28.555 18:08:47 -- common/autotest_common.sh@955 -- # kill 65315 00:10:28.555 18:08:47 -- common/autotest_common.sh@960 -- # wait 65315 00:10:28.814 18:08:47 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:28.814 18:08:47 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:10:28.814 18:08:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:28.814 18:08:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.814 18:08:47 -- common/autotest_common.sh@10 -- # set +x 00:10:28.814 18:08:47 -- nvmf/common.sh@469 -- # nvmfpid=65383 00:10:28.814 18:08:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:28.814 18:08:47 -- nvmf/common.sh@470 -- # waitforlisten 65383 00:10:28.814 18:08:47 -- common/autotest_common.sh@829 -- # '[' -z 65383 ']' 00:10:28.814 18:08:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.814 18:08:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.814 18:08:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.814 18:08:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.814 18:08:47 -- common/autotest_common.sh@10 -- # set +x 00:10:28.814 [2024-11-18 18:08:47.351623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:28.814 [2024-11-18 18:08:47.351725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.074 [2024-11-18 18:08:47.486708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.074 [2024-11-18 18:08:47.536298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.074 [2024-11-18 18:08:47.536447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:29.074 [2024-11-18 18:08:47.536459] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.074 [2024-11-18 18:08:47.536466] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.074 [2024-11-18 18:08:47.536495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.642 18:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.642 18:08:48 -- common/autotest_common.sh@862 -- # return 0 00:10:29.642 18:08:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:29.642 18:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.642 18:08:48 -- common/autotest_common.sh@10 -- # set +x 00:10:29.901 18:08:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.901 18:08:48 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:29.901 18:08:48 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:29.901 18:08:48 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:30.160 [2024-11-18 18:08:48.512342] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.160 18:08:48 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:30.160 18:08:48 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:30.419 [2024-11-18 18:08:48.932428] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:30.419 [2024-11-18 18:08:48.932694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.419 18:08:48 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:30.678 malloc0 00:10:30.678 18:08:49 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:30.936 18:08:49 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:31.195 18:08:49 -- target/tls.sh@197 -- # bdevperf_pid=65432 00:10:31.195 18:08:49 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:31.195 18:08:49 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:31.195 18:08:49 -- target/tls.sh@200 -- # waitforlisten 65432 /var/tmp/bdevperf.sock 00:10:31.195 18:08:49 -- common/autotest_common.sh@829 -- # '[' -z 65432 ']' 00:10:31.195 18:08:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:31.195 18:08:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.195 18:08:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:31.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
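For reference, the target-side TLS setup that the setup_nvmf_tgt trace above performs reduces to the following RPC sequence (addresses, NQNs, and the key path are copied verbatim from the log; $rpc is just shorthand for the traced rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: listener requires TLS
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt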
00:10:31.195 18:08:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.195 18:08:49 -- common/autotest_common.sh@10 -- # set +x 00:10:31.195 [2024-11-18 18:08:49.701965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:31.195 [2024-11-18 18:08:49.702970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65432 ] 00:10:31.454 [2024-11-18 18:08:49.845255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.454 [2024-11-18 18:08:49.913002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.021 18:08:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.021 18:08:50 -- common/autotest_common.sh@862 -- # return 0 00:10:32.021 18:08:50 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:32.280 [2024-11-18 18:08:50.858029] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:32.538 TLSTESTn1 00:10:32.538 18:08:50 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:32.809 18:08:51 -- target/tls.sh@205 -- # tgtconf='{ 00:10:32.809 "subsystems": [ 00:10:32.809 { 00:10:32.809 "subsystem": "iobuf", 00:10:32.809 "config": [ 00:10:32.809 { 00:10:32.809 "method": "iobuf_set_options", 00:10:32.809 "params": { 00:10:32.809 "small_pool_count": 8192, 00:10:32.809 "large_pool_count": 1024, 00:10:32.809 "small_bufsize": 8192, 00:10:32.809 "large_bufsize": 135168 00:10:32.809 } 00:10:32.809 } 00:10:32.809 ] 00:10:32.809 }, 00:10:32.809 { 00:10:32.809 "subsystem": "sock", 00:10:32.809 "config": [ 00:10:32.809 { 00:10:32.809 "method": "sock_impl_set_options", 00:10:32.809 "params": { 00:10:32.809 "impl_name": "uring", 00:10:32.809 "recv_buf_size": 2097152, 00:10:32.810 "send_buf_size": 2097152, 00:10:32.810 "enable_recv_pipe": true, 00:10:32.810 "enable_quickack": false, 00:10:32.810 "enable_placement_id": 0, 00:10:32.810 "enable_zerocopy_send_server": false, 00:10:32.810 "enable_zerocopy_send_client": false, 00:10:32.810 "zerocopy_threshold": 0, 00:10:32.810 "tls_version": 0, 00:10:32.810 "enable_ktls": false 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "sock_impl_set_options", 00:10:32.810 "params": { 00:10:32.810 "impl_name": "posix", 00:10:32.810 "recv_buf_size": 2097152, 00:10:32.810 "send_buf_size": 2097152, 00:10:32.810 "enable_recv_pipe": true, 00:10:32.810 "enable_quickack": false, 00:10:32.810 "enable_placement_id": 0, 00:10:32.810 "enable_zerocopy_send_server": true, 00:10:32.810 "enable_zerocopy_send_client": false, 00:10:32.810 "zerocopy_threshold": 0, 00:10:32.810 "tls_version": 0, 00:10:32.810 "enable_ktls": false 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "sock_impl_set_options", 00:10:32.810 "params": { 00:10:32.810 "impl_name": "ssl", 00:10:32.810 "recv_buf_size": 4096, 00:10:32.810 "send_buf_size": 4096, 00:10:32.810 "enable_recv_pipe": true, 00:10:32.810 "enable_quickack": false, 00:10:32.810 "enable_placement_id": 0, 00:10:32.810 "enable_zerocopy_send_server": true, 00:10:32.810 "enable_zerocopy_send_client": false, 00:10:32.810 
"zerocopy_threshold": 0, 00:10:32.810 "tls_version": 0, 00:10:32.810 "enable_ktls": false 00:10:32.810 } 00:10:32.810 } 00:10:32.810 ] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "vmd", 00:10:32.810 "config": [] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "accel", 00:10:32.810 "config": [ 00:10:32.810 { 00:10:32.810 "method": "accel_set_options", 00:10:32.810 "params": { 00:10:32.810 "small_cache_size": 128, 00:10:32.810 "large_cache_size": 16, 00:10:32.810 "task_count": 2048, 00:10:32.810 "sequence_count": 2048, 00:10:32.810 "buf_count": 2048 00:10:32.810 } 00:10:32.810 } 00:10:32.810 ] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "bdev", 00:10:32.810 "config": [ 00:10:32.810 { 00:10:32.810 "method": "bdev_set_options", 00:10:32.810 "params": { 00:10:32.810 "bdev_io_pool_size": 65535, 00:10:32.810 "bdev_io_cache_size": 256, 00:10:32.810 "bdev_auto_examine": true, 00:10:32.810 "iobuf_small_cache_size": 128, 00:10:32.810 "iobuf_large_cache_size": 16 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_raid_set_options", 00:10:32.810 "params": { 00:10:32.810 "process_window_size_kb": 1024 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_iscsi_set_options", 00:10:32.810 "params": { 00:10:32.810 "timeout_sec": 30 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_nvme_set_options", 00:10:32.810 "params": { 00:10:32.810 "action_on_timeout": "none", 00:10:32.810 "timeout_us": 0, 00:10:32.810 "timeout_admin_us": 0, 00:10:32.810 "keep_alive_timeout_ms": 10000, 00:10:32.810 "transport_retry_count": 4, 00:10:32.810 "arbitration_burst": 0, 00:10:32.810 "low_priority_weight": 0, 00:10:32.810 "medium_priority_weight": 0, 00:10:32.810 "high_priority_weight": 0, 00:10:32.810 "nvme_adminq_poll_period_us": 10000, 00:10:32.810 "nvme_ioq_poll_period_us": 0, 00:10:32.810 "io_queue_requests": 0, 00:10:32.810 "delay_cmd_submit": true, 00:10:32.810 "bdev_retry_count": 3, 00:10:32.810 "transport_ack_timeout": 0, 00:10:32.810 "ctrlr_loss_timeout_sec": 0, 00:10:32.810 "reconnect_delay_sec": 0, 00:10:32.810 "fast_io_fail_timeout_sec": 0, 00:10:32.810 "generate_uuids": false, 00:10:32.810 "transport_tos": 0, 00:10:32.810 "io_path_stat": false, 00:10:32.810 "allow_accel_sequence": false 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_nvme_set_hotplug", 00:10:32.810 "params": { 00:10:32.810 "period_us": 100000, 00:10:32.810 "enable": false 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_malloc_create", 00:10:32.810 "params": { 00:10:32.810 "name": "malloc0", 00:10:32.810 "num_blocks": 8192, 00:10:32.810 "block_size": 4096, 00:10:32.810 "physical_block_size": 4096, 00:10:32.810 "uuid": "0a7453a0-9605-4a3a-a6ed-31d401c66547", 00:10:32.810 "optimal_io_boundary": 0 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "bdev_wait_for_examine" 00:10:32.810 } 00:10:32.810 ] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "nbd", 00:10:32.810 "config": [] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "scheduler", 00:10:32.810 "config": [ 00:10:32.810 { 00:10:32.810 "method": "framework_set_scheduler", 00:10:32.810 "params": { 00:10:32.810 "name": "static" 00:10:32.810 } 00:10:32.810 } 00:10:32.810 ] 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "subsystem": "nvmf", 00:10:32.810 "config": [ 00:10:32.810 { 00:10:32.810 "method": "nvmf_set_config", 00:10:32.810 "params": { 00:10:32.810 "discovery_filter": "match_any", 00:10:32.810 
"admin_cmd_passthru": { 00:10:32.810 "identify_ctrlr": false 00:10:32.810 } 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_set_max_subsystems", 00:10:32.810 "params": { 00:10:32.810 "max_subsystems": 1024 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_set_crdt", 00:10:32.810 "params": { 00:10:32.810 "crdt1": 0, 00:10:32.810 "crdt2": 0, 00:10:32.810 "crdt3": 0 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_create_transport", 00:10:32.810 "params": { 00:10:32.810 "trtype": "TCP", 00:10:32.810 "max_queue_depth": 128, 00:10:32.810 "max_io_qpairs_per_ctrlr": 127, 00:10:32.810 "in_capsule_data_size": 4096, 00:10:32.810 "max_io_size": 131072, 00:10:32.810 "io_unit_size": 131072, 00:10:32.810 "max_aq_depth": 128, 00:10:32.810 "num_shared_buffers": 511, 00:10:32.810 "buf_cache_size": 4294967295, 00:10:32.810 "dif_insert_or_strip": false, 00:10:32.810 "zcopy": false, 00:10:32.810 "c2h_success": false, 00:10:32.810 "sock_priority": 0, 00:10:32.810 "abort_timeout_sec": 1 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_create_subsystem", 00:10:32.810 "params": { 00:10:32.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.810 "allow_any_host": false, 00:10:32.810 "serial_number": "SPDK00000000000001", 00:10:32.810 "model_number": "SPDK bdev Controller", 00:10:32.810 "max_namespaces": 10, 00:10:32.810 "min_cntlid": 1, 00:10:32.810 "max_cntlid": 65519, 00:10:32.810 "ana_reporting": false 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_subsystem_add_host", 00:10:32.810 "params": { 00:10:32.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.810 "host": "nqn.2016-06.io.spdk:host1", 00:10:32.810 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:32.810 } 00:10:32.810 }, 00:10:32.810 { 00:10:32.810 "method": "nvmf_subsystem_add_ns", 00:10:32.810 "params": { 00:10:32.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.811 "namespace": { 00:10:32.811 "nsid": 1, 00:10:32.811 "bdev_name": "malloc0", 00:10:32.811 "nguid": "0A7453A096054A3AA6ED31D401C66547", 00:10:32.811 "uuid": "0a7453a0-9605-4a3a-a6ed-31d401c66547" 00:10:32.811 } 00:10:32.811 } 00:10:32.811 }, 00:10:32.811 { 00:10:32.811 "method": "nvmf_subsystem_add_listener", 00:10:32.811 "params": { 00:10:32.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.811 "listen_address": { 00:10:32.811 "trtype": "TCP", 00:10:32.811 "adrfam": "IPv4", 00:10:32.811 "traddr": "10.0.0.2", 00:10:32.811 "trsvcid": "4420" 00:10:32.811 }, 00:10:32.811 "secure_channel": true 00:10:32.811 } 00:10:32.811 } 00:10:32.811 ] 00:10:32.811 } 00:10:32.811 ] 00:10:32.811 }' 00:10:32.811 18:08:51 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:10:33.082 18:08:51 -- target/tls.sh@206 -- # bdevperfconf='{ 00:10:33.082 "subsystems": [ 00:10:33.082 { 00:10:33.082 "subsystem": "iobuf", 00:10:33.082 "config": [ 00:10:33.082 { 00:10:33.082 "method": "iobuf_set_options", 00:10:33.082 "params": { 00:10:33.082 "small_pool_count": 8192, 00:10:33.082 "large_pool_count": 1024, 00:10:33.082 "small_bufsize": 8192, 00:10:33.082 "large_bufsize": 135168 00:10:33.082 } 00:10:33.082 } 00:10:33.082 ] 00:10:33.082 }, 00:10:33.082 { 00:10:33.082 "subsystem": "sock", 00:10:33.082 "config": [ 00:10:33.082 { 00:10:33.082 "method": "sock_impl_set_options", 00:10:33.082 "params": { 00:10:33.082 "impl_name": "uring", 00:10:33.082 "recv_buf_size": 2097152, 00:10:33.082 "send_buf_size": 2097152, 
00:10:33.082 "enable_recv_pipe": true, 00:10:33.082 "enable_quickack": false, 00:10:33.082 "enable_placement_id": 0, 00:10:33.082 "enable_zerocopy_send_server": false, 00:10:33.082 "enable_zerocopy_send_client": false, 00:10:33.082 "zerocopy_threshold": 0, 00:10:33.082 "tls_version": 0, 00:10:33.082 "enable_ktls": false 00:10:33.082 } 00:10:33.082 }, 00:10:33.082 { 00:10:33.082 "method": "sock_impl_set_options", 00:10:33.082 "params": { 00:10:33.082 "impl_name": "posix", 00:10:33.082 "recv_buf_size": 2097152, 00:10:33.082 "send_buf_size": 2097152, 00:10:33.082 "enable_recv_pipe": true, 00:10:33.082 "enable_quickack": false, 00:10:33.082 "enable_placement_id": 0, 00:10:33.082 "enable_zerocopy_send_server": true, 00:10:33.082 "enable_zerocopy_send_client": false, 00:10:33.082 "zerocopy_threshold": 0, 00:10:33.082 "tls_version": 0, 00:10:33.082 "enable_ktls": false 00:10:33.082 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "sock_impl_set_options", 00:10:33.083 "params": { 00:10:33.083 "impl_name": "ssl", 00:10:33.083 "recv_buf_size": 4096, 00:10:33.083 "send_buf_size": 4096, 00:10:33.083 "enable_recv_pipe": true, 00:10:33.083 "enable_quickack": false, 00:10:33.083 "enable_placement_id": 0, 00:10:33.083 "enable_zerocopy_send_server": true, 00:10:33.083 "enable_zerocopy_send_client": false, 00:10:33.083 "zerocopy_threshold": 0, 00:10:33.083 "tls_version": 0, 00:10:33.083 "enable_ktls": false 00:10:33.083 } 00:10:33.083 } 00:10:33.083 ] 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "subsystem": "vmd", 00:10:33.083 "config": [] 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "subsystem": "accel", 00:10:33.083 "config": [ 00:10:33.083 { 00:10:33.083 "method": "accel_set_options", 00:10:33.083 "params": { 00:10:33.083 "small_cache_size": 128, 00:10:33.083 "large_cache_size": 16, 00:10:33.083 "task_count": 2048, 00:10:33.083 "sequence_count": 2048, 00:10:33.083 "buf_count": 2048 00:10:33.083 } 00:10:33.083 } 00:10:33.083 ] 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "subsystem": "bdev", 00:10:33.083 "config": [ 00:10:33.083 { 00:10:33.083 "method": "bdev_set_options", 00:10:33.083 "params": { 00:10:33.083 "bdev_io_pool_size": 65535, 00:10:33.083 "bdev_io_cache_size": 256, 00:10:33.083 "bdev_auto_examine": true, 00:10:33.083 "iobuf_small_cache_size": 128, 00:10:33.083 "iobuf_large_cache_size": 16 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_raid_set_options", 00:10:33.083 "params": { 00:10:33.083 "process_window_size_kb": 1024 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_iscsi_set_options", 00:10:33.083 "params": { 00:10:33.083 "timeout_sec": 30 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_nvme_set_options", 00:10:33.083 "params": { 00:10:33.083 "action_on_timeout": "none", 00:10:33.083 "timeout_us": 0, 00:10:33.083 "timeout_admin_us": 0, 00:10:33.083 "keep_alive_timeout_ms": 10000, 00:10:33.083 "transport_retry_count": 4, 00:10:33.083 "arbitration_burst": 0, 00:10:33.083 "low_priority_weight": 0, 00:10:33.083 "medium_priority_weight": 0, 00:10:33.083 "high_priority_weight": 0, 00:10:33.083 "nvme_adminq_poll_period_us": 10000, 00:10:33.083 "nvme_ioq_poll_period_us": 0, 00:10:33.083 "io_queue_requests": 512, 00:10:33.083 "delay_cmd_submit": true, 00:10:33.083 "bdev_retry_count": 3, 00:10:33.083 "transport_ack_timeout": 0, 00:10:33.083 "ctrlr_loss_timeout_sec": 0, 00:10:33.083 "reconnect_delay_sec": 0, 00:10:33.083 "fast_io_fail_timeout_sec": 0, 00:10:33.083 "generate_uuids": false, 00:10:33.083 
"transport_tos": 0, 00:10:33.083 "io_path_stat": false, 00:10:33.083 "allow_accel_sequence": false 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_nvme_attach_controller", 00:10:33.083 "params": { 00:10:33.083 "name": "TLSTEST", 00:10:33.083 "trtype": "TCP", 00:10:33.083 "adrfam": "IPv4", 00:10:33.083 "traddr": "10.0.0.2", 00:10:33.083 "trsvcid": "4420", 00:10:33.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.083 "prchk_reftag": false, 00:10:33.083 "prchk_guard": false, 00:10:33.083 "ctrlr_loss_timeout_sec": 0, 00:10:33.083 "reconnect_delay_sec": 0, 00:10:33.083 "fast_io_fail_timeout_sec": 0, 00:10:33.083 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:33.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.083 "hdgst": false, 00:10:33.083 "ddgst": false 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_nvme_set_hotplug", 00:10:33.083 "params": { 00:10:33.083 "period_us": 100000, 00:10:33.083 "enable": false 00:10:33.083 } 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "method": "bdev_wait_for_examine" 00:10:33.083 } 00:10:33.083 ] 00:10:33.083 }, 00:10:33.083 { 00:10:33.083 "subsystem": "nbd", 00:10:33.083 "config": [] 00:10:33.083 } 00:10:33.083 ] 00:10:33.083 }' 00:10:33.083 18:08:51 -- target/tls.sh@208 -- # killprocess 65432 00:10:33.083 18:08:51 -- common/autotest_common.sh@936 -- # '[' -z 65432 ']' 00:10:33.083 18:08:51 -- common/autotest_common.sh@940 -- # kill -0 65432 00:10:33.083 18:08:51 -- common/autotest_common.sh@941 -- # uname 00:10:33.083 18:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:33.083 18:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65432 00:10:33.083 killing process with pid 65432 00:10:33.083 Received shutdown signal, test time was about 10.000000 seconds 00:10:33.083 00:10:33.083 Latency(us) 00:10:33.083 [2024-11-18T18:08:51.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.083 [2024-11-18T18:08:51.687Z] =================================================================================================================== 00:10:33.083 [2024-11-18T18:08:51.687Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:33.083 18:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:33.083 18:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:33.083 18:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65432' 00:10:33.083 18:08:51 -- common/autotest_common.sh@955 -- # kill 65432 00:10:33.083 18:08:51 -- common/autotest_common.sh@960 -- # wait 65432 00:10:33.342 18:08:51 -- target/tls.sh@209 -- # killprocess 65383 00:10:33.342 18:08:51 -- common/autotest_common.sh@936 -- # '[' -z 65383 ']' 00:10:33.342 18:08:51 -- common/autotest_common.sh@940 -- # kill -0 65383 00:10:33.342 18:08:51 -- common/autotest_common.sh@941 -- # uname 00:10:33.342 18:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:33.342 18:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65383 00:10:33.342 killing process with pid 65383 00:10:33.342 18:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:33.342 18:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:33.342 18:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65383' 00:10:33.342 18:08:51 -- common/autotest_common.sh@955 -- # kill 65383 00:10:33.342 18:08:51 -- common/autotest_common.sh@960 -- # 
wait 65383 00:10:33.602 18:08:52 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:10:33.602 18:08:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:33.602 18:08:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.602 18:08:52 -- target/tls.sh@212 -- # echo '{ 00:10:33.602 "subsystems": [ 00:10:33.602 { 00:10:33.602 "subsystem": "iobuf", 00:10:33.602 "config": [ 00:10:33.602 { 00:10:33.602 "method": "iobuf_set_options", 00:10:33.602 "params": { 00:10:33.602 "small_pool_count": 8192, 00:10:33.602 "large_pool_count": 1024, 00:10:33.602 "small_bufsize": 8192, 00:10:33.602 "large_bufsize": 135168 00:10:33.602 } 00:10:33.602 } 00:10:33.602 ] 00:10:33.602 }, 00:10:33.602 { 00:10:33.602 "subsystem": "sock", 00:10:33.602 "config": [ 00:10:33.602 { 00:10:33.602 "method": "sock_impl_set_options", 00:10:33.602 "params": { 00:10:33.602 "impl_name": "uring", 00:10:33.602 "recv_buf_size": 2097152, 00:10:33.602 "send_buf_size": 2097152, 00:10:33.602 "enable_recv_pipe": true, 00:10:33.602 "enable_quickack": false, 00:10:33.602 "enable_placement_id": 0, 00:10:33.602 "enable_zerocopy_send_server": false, 00:10:33.602 "enable_zerocopy_send_client": false, 00:10:33.602 "zerocopy_threshold": 0, 00:10:33.602 "tls_version": 0, 00:10:33.602 "enable_ktls": false 00:10:33.602 } 00:10:33.602 }, 00:10:33.602 { 00:10:33.602 "method": "sock_impl_set_options", 00:10:33.602 "params": { 00:10:33.602 "impl_name": "posix", 00:10:33.603 "recv_buf_size": 2097152, 00:10:33.603 "send_buf_size": 2097152, 00:10:33.603 "enable_recv_pipe": true, 00:10:33.603 "enable_quickack": false, 00:10:33.603 "enable_placement_id": 0, 00:10:33.603 "enable_zerocopy_send_server": true, 00:10:33.603 "enable_zerocopy_send_client": false, 00:10:33.603 "zerocopy_threshold": 0, 00:10:33.603 "tls_version": 0, 00:10:33.603 "enable_ktls": false 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "sock_impl_set_options", 00:10:33.603 "params": { 00:10:33.603 "impl_name": "ssl", 00:10:33.603 "recv_buf_size": 4096, 00:10:33.603 "send_buf_size": 4096, 00:10:33.603 "enable_recv_pipe": true, 00:10:33.603 "enable_quickack": false, 00:10:33.603 "enable_placement_id": 0, 00:10:33.603 "enable_zerocopy_send_server": true, 00:10:33.603 "enable_zerocopy_send_client": false, 00:10:33.603 "zerocopy_threshold": 0, 00:10:33.603 "tls_version": 0, 00:10:33.603 "enable_ktls": false 00:10:33.603 } 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "vmd", 00:10:33.603 "config": [] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "accel", 00:10:33.603 "config": [ 00:10:33.603 { 00:10:33.603 "method": "accel_set_options", 00:10:33.603 "params": { 00:10:33.603 "small_cache_size": 128, 00:10:33.603 "large_cache_size": 16, 00:10:33.603 "task_count": 2048, 00:10:33.603 "sequence_count": 2048, 00:10:33.603 "buf_count": 2048 00:10:33.603 } 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "bdev", 00:10:33.603 "config": [ 00:10:33.603 { 00:10:33.603 "method": "bdev_set_options", 00:10:33.603 "params": { 00:10:33.603 "bdev_io_pool_size": 65535, 00:10:33.603 "bdev_io_cache_size": 256, 00:10:33.603 "bdev_auto_examine": true, 00:10:33.603 "iobuf_small_cache_size": 128, 00:10:33.603 "iobuf_large_cache_size": 16 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "bdev_raid_set_options", 00:10:33.603 "params": { 00:10:33.603 "process_window_size_kb": 1024 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": 
"bdev_iscsi_set_options", 00:10:33.603 "params": { 00:10:33.603 "timeout_sec": 30 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "bdev_nvme_set_options", 00:10:33.603 "params": { 00:10:33.603 "action_on_timeout": "none", 00:10:33.603 "timeout_us": 0, 00:10:33.603 "timeout_admin_us": 0, 00:10:33.603 "keep_alive_timeout_ms": 10000, 00:10:33.603 "transport_retry_count": 4, 00:10:33.603 "arbitration_burst": 0, 00:10:33.603 "low_priority_weight": 0, 00:10:33.603 "medium_priority_weight": 0, 00:10:33.603 "high_priority_weight": 0, 00:10:33.603 "nvme_adminq_poll_period_us": 10000, 00:10:33.603 "nvme_ioq_poll_period_us": 0, 00:10:33.603 "io_queue_requests": 0, 00:10:33.603 "delay_cmd_submit": true, 00:10:33.603 "bdev_retry_count": 3, 00:10:33.603 "transport_ack_timeout": 0, 00:10:33.603 "ctrlr_loss_timeout_sec": 0, 00:10:33.603 "reconnect_delay_sec": 0, 00:10:33.603 "fast_io_fail_timeout_sec": 0, 00:10:33.603 "generate_uuids": false, 00:10:33.603 "transport_tos": 0, 00:10:33.603 "io_path_stat": false, 00:10:33.603 "allow_accel_sequence": false 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "bdev_nvme_set_hotplug", 00:10:33.603 "params": { 00:10:33.603 "period_us": 100000, 00:10:33.603 "enable": false 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "bdev_malloc_create", 00:10:33.603 "params": { 00:10:33.603 "name": "malloc0", 00:10:33.603 "num_blocks": 8192, 00:10:33.603 "block_size": 4096, 00:10:33.603 "physical_block_size": 4096, 00:10:33.603 "uuid": "0a7453a0-9605-4a3a-a6ed-31d401c66547", 00:10:33.603 "optimal_io_boundary": 0 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "bdev_wait_for_examine" 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "nbd", 00:10:33.603 "config": [] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "scheduler", 00:10:33.603 "config": [ 00:10:33.603 { 00:10:33.603 "method": "framework_set_scheduler", 00:10:33.603 "params": { 00:10:33.603 "name": "static" 00:10:33.603 } 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "subsystem": "nvmf", 00:10:33.603 "config": [ 00:10:33.603 { 00:10:33.603 "method": "nvmf_set_config", 00:10:33.603 "params": { 00:10:33.603 "discovery_filter": "match_any", 00:10:33.603 "admin_cmd_passthru": { 00:10:33.603 "identify_ctrlr": false 00:10:33.603 } 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_set_max_subsystems", 00:10:33.603 "params": { 00:10:33.603 "max_subsystems": 1024 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_set_crdt", 00:10:33.603 "params": { 00:10:33.603 "crdt1": 0, 00:10:33.603 "crdt2": 0, 00:10:33.603 "crdt3": 0 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_create_transport", 00:10:33.603 "params": { 00:10:33.603 "trtype": "TCP", 00:10:33.603 "max_queue_depth": 128, 00:10:33.603 "max_io_qpairs_per_ctrlr": 127, 00:10:33.603 "in_capsule_data_size": 4096, 00:10:33.603 "max_io_size": 131072, 00:10:33.603 "io_unit_size": 131072, 00:10:33.603 "max_aq_depth": 128, 00:10:33.603 "num_shared_buffers": 511, 00:10:33.603 "buf_cache_size": 4294967295, 00:10:33.603 "dif_insert_or_strip": false, 00:10:33.603 "zcopy": false, 00:10:33.603 "c2h_success": false, 00:10:33.603 "sock_priority": 0, 00:10:33.603 "abort_timeout_sec": 1 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_create_subsystem", 00:10:33.603 "params": { 00:10:33.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.603 
"allow_any_host": false, 00:10:33.603 "serial_number": "SPDK00000000000001", 00:10:33.603 "model_number": "SPDK bdev Controller", 00:10:33.603 "max_namespaces": 10, 00:10:33.603 "min_cntlid": 1, 00:10:33.603 "max_cntlid": 65519, 00:10:33.603 "ana_reporting": false 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_subsystem_add_host", 00:10:33.603 "params": { 00:10:33.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.603 "host": "nqn.2016-06.io.spdk:host1", 00:10:33.603 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_subsystem_add_ns", 00:10:33.603 "params": { 00:10:33.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.603 "namespace": { 00:10:33.603 "nsid": 1, 00:10:33.603 "bdev_name": "malloc0", 00:10:33.603 "nguid": "0A7453A096054A3AA6ED31D401C66547", 00:10:33.603 "uuid": "0a7453a0-9605-4a3a-a6ed-31d401c66547" 00:10:33.603 } 00:10:33.603 } 00:10:33.603 }, 00:10:33.603 { 00:10:33.603 "method": "nvmf_subsystem_add_listener", 00:10:33.603 "params": { 00:10:33.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.603 "listen_address": { 00:10:33.603 "trtype": "TCP", 00:10:33.603 "adrfam": "IPv4", 00:10:33.603 "traddr": "10.0.0.2", 00:10:33.603 "trsvcid": "4420" 00:10:33.603 }, 00:10:33.603 "secure_channel": true 00:10:33.603 } 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 } 00:10:33.603 ] 00:10:33.603 }' 00:10:33.603 18:08:52 -- common/autotest_common.sh@10 -- # set +x 00:10:33.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.603 18:08:52 -- nvmf/common.sh@469 -- # nvmfpid=65475 00:10:33.604 18:08:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:10:33.604 18:08:52 -- nvmf/common.sh@470 -- # waitforlisten 65475 00:10:33.604 18:08:52 -- common/autotest_common.sh@829 -- # '[' -z 65475 ']' 00:10:33.604 18:08:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.604 18:08:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.604 18:08:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.604 18:08:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.604 18:08:52 -- common/autotest_common.sh@10 -- # set +x 00:10:33.604 [2024-11-18 18:08:52.095733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:33.604 [2024-11-18 18:08:52.096351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.862 [2024-11-18 18:08:52.237645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.862 [2024-11-18 18:08:52.288943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:33.862 [2024-11-18 18:08:52.289328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.862 [2024-11-18 18:08:52.289349] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.862 [2024-11-18 18:08:52.289358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:33.863 [2024-11-18 18:08:52.289391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.121 [2024-11-18 18:08:52.471911] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.121 [2024-11-18 18:08:52.503868] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:34.121 [2024-11-18 18:08:52.504082] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.689 18:08:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.689 18:08:53 -- common/autotest_common.sh@862 -- # return 0 00:10:34.689 18:08:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:34.689 18:08:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.689 18:08:53 -- common/autotest_common.sh@10 -- # set +x 00:10:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:34.689 18:08:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.689 18:08:53 -- target/tls.sh@216 -- # bdevperf_pid=65507 00:10:34.689 18:08:53 -- target/tls.sh@217 -- # waitforlisten 65507 /var/tmp/bdevperf.sock 00:10:34.689 18:08:53 -- common/autotest_common.sh@829 -- # '[' -z 65507 ']' 00:10:34.689 18:08:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:34.689 18:08:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.689 18:08:53 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:10:34.689 18:08:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
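The target now listening on 10.0.0.2 port 4420 was configured entirely by the JSON blob echoed above into nvmf_tgt via -c /dev/fd/62. Reduced to its TLS-relevant methods, that configuration looks roughly like the sketch below; every method name, NQN, address and PSK path is copied from the config above, while the file name /tmp/tls_target_min.json is only an illustrative placeholder and the malloc0/namespace setup is omitted.

```bash
# Minimal sketch, assuming the same NQNs, address and PSK path as the config echoed above.
cat > /tmp/tls_target_min.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP", "c2h_success": false } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "allow_any_host": false,
                      "serial_number": "SPDK00000000000001" } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } }
      ]
    }
  ]
}
EOF
# Started the same way as above, e.g.: nvmf_tgt -m 0x2 -c /tmp/tls_target_min.json
```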
00:10:34.689 18:08:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.689 18:08:53 -- target/tls.sh@213 -- # echo '{ 00:10:34.689 "subsystems": [ 00:10:34.689 { 00:10:34.689 "subsystem": "iobuf", 00:10:34.689 "config": [ 00:10:34.689 { 00:10:34.689 "method": "iobuf_set_options", 00:10:34.689 "params": { 00:10:34.689 "small_pool_count": 8192, 00:10:34.689 "large_pool_count": 1024, 00:10:34.689 "small_bufsize": 8192, 00:10:34.689 "large_bufsize": 135168 00:10:34.689 } 00:10:34.689 } 00:10:34.689 ] 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "subsystem": "sock", 00:10:34.689 "config": [ 00:10:34.689 { 00:10:34.689 "method": "sock_impl_set_options", 00:10:34.689 "params": { 00:10:34.689 "impl_name": "uring", 00:10:34.689 "recv_buf_size": 2097152, 00:10:34.689 "send_buf_size": 2097152, 00:10:34.689 "enable_recv_pipe": true, 00:10:34.689 "enable_quickack": false, 00:10:34.689 "enable_placement_id": 0, 00:10:34.689 "enable_zerocopy_send_server": false, 00:10:34.689 "enable_zerocopy_send_client": false, 00:10:34.689 "zerocopy_threshold": 0, 00:10:34.689 "tls_version": 0, 00:10:34.689 "enable_ktls": false 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "sock_impl_set_options", 00:10:34.689 "params": { 00:10:34.689 "impl_name": "posix", 00:10:34.689 "recv_buf_size": 2097152, 00:10:34.689 "send_buf_size": 2097152, 00:10:34.689 "enable_recv_pipe": true, 00:10:34.689 "enable_quickack": false, 00:10:34.689 "enable_placement_id": 0, 00:10:34.689 "enable_zerocopy_send_server": true, 00:10:34.689 "enable_zerocopy_send_client": false, 00:10:34.689 "zerocopy_threshold": 0, 00:10:34.689 "tls_version": 0, 00:10:34.689 "enable_ktls": false 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "sock_impl_set_options", 00:10:34.689 "params": { 00:10:34.689 "impl_name": "ssl", 00:10:34.689 "recv_buf_size": 4096, 00:10:34.689 "send_buf_size": 4096, 00:10:34.689 "enable_recv_pipe": true, 00:10:34.689 "enable_quickack": false, 00:10:34.689 "enable_placement_id": 0, 00:10:34.689 "enable_zerocopy_send_server": true, 00:10:34.689 "enable_zerocopy_send_client": false, 00:10:34.689 "zerocopy_threshold": 0, 00:10:34.689 "tls_version": 0, 00:10:34.689 "enable_ktls": false 00:10:34.689 } 00:10:34.689 } 00:10:34.689 ] 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "subsystem": "vmd", 00:10:34.689 "config": [] 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "subsystem": "accel", 00:10:34.689 "config": [ 00:10:34.689 { 00:10:34.689 "method": "accel_set_options", 00:10:34.689 "params": { 00:10:34.689 "small_cache_size": 128, 00:10:34.689 "large_cache_size": 16, 00:10:34.689 "task_count": 2048, 00:10:34.689 "sequence_count": 2048, 00:10:34.689 "buf_count": 2048 00:10:34.689 } 00:10:34.689 } 00:10:34.689 ] 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "subsystem": "bdev", 00:10:34.689 "config": [ 00:10:34.689 { 00:10:34.689 "method": "bdev_set_options", 00:10:34.689 "params": { 00:10:34.689 "bdev_io_pool_size": 65535, 00:10:34.689 "bdev_io_cache_size": 256, 00:10:34.689 "bdev_auto_examine": true, 00:10:34.689 "iobuf_small_cache_size": 128, 00:10:34.689 "iobuf_large_cache_size": 16 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "bdev_raid_set_options", 00:10:34.689 "params": { 00:10:34.689 "process_window_size_kb": 1024 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "bdev_iscsi_set_options", 00:10:34.689 "params": { 00:10:34.689 "timeout_sec": 30 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "bdev_nvme_set_options", 
00:10:34.689 "params": { 00:10:34.689 "action_on_timeout": "none", 00:10:34.689 "timeout_us": 0, 00:10:34.689 "timeout_admin_us": 0, 00:10:34.689 "keep_alive_timeout_ms": 10000, 00:10:34.689 "transport_retry_count": 4, 00:10:34.689 "arbitration_burst": 0, 00:10:34.689 "low_priority_weight": 0, 00:10:34.689 "medium_priority_weight": 0, 00:10:34.689 "high_priority_weight": 0, 00:10:34.689 "nvme_adminq_poll_period_us": 10000, 00:10:34.689 "nvme_ioq_poll_period_us": 0, 00:10:34.689 "io_queue_requests": 512, 00:10:34.689 "delay_cmd_submit": true, 00:10:34.689 "bdev_retry_count": 3, 00:10:34.689 "transport_ack_timeout": 0, 00:10:34.689 "ctrlr_loss_timeout_sec": 0, 00:10:34.689 "reconnect_delay_sec": 0, 00:10:34.689 "fast_io_fail_timeout_sec": 0, 00:10:34.689 "generate_uuids": false, 00:10:34.689 "transport_tos": 0, 00:10:34.689 "io_path_stat": false, 00:10:34.689 "allow_accel_sequence": false 00:10:34.689 } 00:10:34.689 }, 00:10:34.689 { 00:10:34.689 "method": "bdev_nvme_attach_controller", 00:10:34.689 "params": { 00:10:34.689 "name": "TLSTEST", 00:10:34.689 "trtype": "TCP", 00:10:34.689 18:08:53 -- common/autotest_common.sh@10 -- # set +x 00:10:34.689 "adrfam": "IPv4", 00:10:34.690 "traddr": "10.0.0.2", 00:10:34.690 "trsvcid": "4420", 00:10:34.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:34.690 "prchk_reftag": false, 00:10:34.690 "prchk_guard": false, 00:10:34.690 "ctrlr_loss_timeout_sec": 0, 00:10:34.690 "reconnect_delay_sec": 0, 00:10:34.690 "fast_io_fail_timeout_sec": 0, 00:10:34.690 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:10:34.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:34.690 "hdgst": false, 00:10:34.690 "ddgst": false 00:10:34.690 } 00:10:34.690 }, 00:10:34.690 { 00:10:34.690 "method": "bdev_nvme_set_hotplug", 00:10:34.690 "params": { 00:10:34.690 "period_us": 100000, 00:10:34.690 "enable": false 00:10:34.690 } 00:10:34.690 }, 00:10:34.690 { 00:10:34.690 "method": "bdev_wait_for_examine" 00:10:34.690 } 00:10:34.690 ] 00:10:34.690 }, 00:10:34.690 { 00:10:34.690 "subsystem": "nbd", 00:10:34.690 "config": [] 00:10:34.690 } 00:10:34.690 ] 00:10:34.690 }' 00:10:34.690 [2024-11-18 18:08:53.153151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:34.690 [2024-11-18 18:08:53.153387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65507 ] 00:10:34.690 [2024-11-18 18:08:53.289611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.948 [2024-11-18 18:08:53.358680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.948 [2024-11-18 18:08:53.482959] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:35.884 18:08:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.884 18:08:54 -- common/autotest_common.sh@862 -- # return 0 00:10:35.884 18:08:54 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:35.884 Running I/O for 10 seconds...
00:10:45.861 00:10:45.861 Latency(us) 00:10:45.861 [2024-11-18T18:09:04.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.861 [2024-11-18T18:09:04.465Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:45.861 Verification LBA range: start 0x0 length 0x2000 00:10:45.861 TLSTESTn1 : 10.01 6248.56 24.41 0.00 0.00 20453.74 4110.89 21328.99 00:10:45.861 [2024-11-18T18:09:04.465Z] =================================================================================================================== 00:10:45.861 [2024-11-18T18:09:04.465Z] Total : 6248.56 24.41 0.00 0.00 20453.74 4110.89 21328.99 00:10:45.861 0 00:10:45.861 18:09:04 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:45.861 18:09:04 -- target/tls.sh@223 -- # killprocess 65507 00:10:45.861 18:09:04 -- common/autotest_common.sh@936 -- # '[' -z 65507 ']' 00:10:45.861 18:09:04 -- common/autotest_common.sh@940 -- # kill -0 65507 00:10:45.861 18:09:04 -- common/autotest_common.sh@941 -- # uname 00:10:45.861 18:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:45.861 18:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65507 00:10:45.861 killing process with pid 65507 00:10:45.861 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.862 00:10:45.862 Latency(us) 00:10:45.862 [2024-11-18T18:09:04.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.862 [2024-11-18T18:09:04.466Z] =================================================================================================================== 00:10:45.862 [2024-11-18T18:09:04.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.862 18:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:45.862 18:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:45.862 18:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65507' 00:10:45.862 18:09:04 -- common/autotest_common.sh@955 -- # kill 65507 00:10:45.862 18:09:04 -- common/autotest_common.sh@960 -- # wait 65507 00:10:46.121 18:09:04 -- target/tls.sh@224 -- # killprocess 65475 00:10:46.121 18:09:04 -- common/autotest_common.sh@936 -- # '[' -z 65475 ']' 00:10:46.121 18:09:04 -- common/autotest_common.sh@940 -- # kill -0 65475 00:10:46.121 18:09:04 -- common/autotest_common.sh@941 -- # uname 00:10:46.121 18:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.121 18:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65475 00:10:46.121 killing process with pid 65475 00:10:46.121 18:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:46.121 18:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:46.121 18:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65475' 00:10:46.121 18:09:04 -- common/autotest_common.sh@955 -- # kill 65475 00:10:46.121 18:09:04 -- common/autotest_common.sh@960 -- # wait 65475 00:10:46.380 18:09:04 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:10:46.380 18:09:04 -- target/tls.sh@227 -- # cleanup 00:10:46.380 18:09:04 -- target/tls.sh@15 -- # process_shm --id 0 00:10:46.380 18:09:04 -- common/autotest_common.sh@806 -- # type=--id 00:10:46.380 18:09:04 -- common/autotest_common.sh@807 -- # id=0 00:10:46.380 18:09:04 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:46.381 18:09:04 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:10:46.381 18:09:04 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:46.381 18:09:04 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:46.381 18:09:04 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:46.381 18:09:04 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:46.381 nvmf_trace.0 00:10:46.381 18:09:04 -- common/autotest_common.sh@821 -- # return 0 00:10:46.381 18:09:04 -- target/tls.sh@16 -- # killprocess 65507 00:10:46.381 18:09:04 -- common/autotest_common.sh@936 -- # '[' -z 65507 ']' 00:10:46.381 18:09:04 -- common/autotest_common.sh@940 -- # kill -0 65507 00:10:46.381 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65507) - No such process 00:10:46.381 Process with pid 65507 is not found 00:10:46.381 18:09:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65507 is not found' 00:10:46.381 18:09:04 -- target/tls.sh@17 -- # nvmftestfini 00:10:46.381 18:09:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:46.381 18:09:04 -- nvmf/common.sh@116 -- # sync 00:10:46.381 18:09:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:46.381 18:09:04 -- nvmf/common.sh@119 -- # set +e 00:10:46.381 18:09:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:46.381 18:09:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:46.381 rmmod nvme_tcp 00:10:46.381 rmmod nvme_fabrics 00:10:46.381 rmmod nvme_keyring 00:10:46.381 18:09:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:46.381 Process with pid 65475 is not found 00:10:46.381 18:09:04 -- nvmf/common.sh@123 -- # set -e 00:10:46.381 18:09:04 -- nvmf/common.sh@124 -- # return 0 00:10:46.381 18:09:04 -- nvmf/common.sh@477 -- # '[' -n 65475 ']' 00:10:46.381 18:09:04 -- nvmf/common.sh@478 -- # killprocess 65475 00:10:46.381 18:09:04 -- common/autotest_common.sh@936 -- # '[' -z 65475 ']' 00:10:46.381 18:09:04 -- common/autotest_common.sh@940 -- # kill -0 65475 00:10:46.381 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65475) - No such process 00:10:46.381 18:09:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65475 is not found' 00:10:46.381 18:09:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:46.381 18:09:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:46.381 18:09:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:46.381 18:09:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.381 18:09:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:46.381 18:09:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.381 18:09:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.381 18:09:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.381 18:09:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:46.381 18:09:04 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:46.381 ************************************ 00:10:46.381 END TEST nvmf_tls 00:10:46.381 ************************************ 00:10:46.381 00:10:46.381 real 1m10.496s 00:10:46.381 user 1m50.664s 00:10:46.381 sys 0m23.020s 00:10:46.381 18:09:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:46.381 18:09:04 -- common/autotest_common.sh@10 -- # 
set +x 00:10:46.381 18:09:04 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:46.381 18:09:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:46.381 18:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.381 18:09:04 -- common/autotest_common.sh@10 -- # set +x 00:10:46.641 ************************************ 00:10:46.641 START TEST nvmf_fips 00:10:46.641 ************************************ 00:10:46.641 18:09:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:10:46.641 * Looking for test storage... 00:10:46.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:10:46.641 18:09:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:46.641 18:09:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:46.641 18:09:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:46.641 18:09:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:46.641 18:09:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:46.641 18:09:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:46.641 18:09:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:46.641 18:09:05 -- scripts/common.sh@335 -- # IFS=.-: 00:10:46.641 18:09:05 -- scripts/common.sh@335 -- # read -ra ver1 00:10:46.641 18:09:05 -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.641 18:09:05 -- scripts/common.sh@336 -- # read -ra ver2 00:10:46.641 18:09:05 -- scripts/common.sh@337 -- # local 'op=<' 00:10:46.641 18:09:05 -- scripts/common.sh@339 -- # ver1_l=2 00:10:46.641 18:09:05 -- scripts/common.sh@340 -- # ver2_l=1 00:10:46.641 18:09:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:46.641 18:09:05 -- scripts/common.sh@343 -- # case "$op" in 00:10:46.641 18:09:05 -- scripts/common.sh@344 -- # : 1 00:10:46.641 18:09:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:46.641 18:09:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.641 18:09:05 -- scripts/common.sh@364 -- # decimal 1 00:10:46.641 18:09:05 -- scripts/common.sh@352 -- # local d=1 00:10:46.641 18:09:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.641 18:09:05 -- scripts/common.sh@354 -- # echo 1 00:10:46.641 18:09:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:46.641 18:09:05 -- scripts/common.sh@365 -- # decimal 2 00:10:46.641 18:09:05 -- scripts/common.sh@352 -- # local d=2 00:10:46.641 18:09:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.641 18:09:05 -- scripts/common.sh@354 -- # echo 2 00:10:46.641 18:09:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:46.641 18:09:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:46.641 18:09:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:46.641 18:09:05 -- scripts/common.sh@367 -- # return 0 00:10:46.641 18:09:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.641 18:09:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:46.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.641 --rc genhtml_branch_coverage=1 00:10:46.641 --rc genhtml_function_coverage=1 00:10:46.641 --rc genhtml_legend=1 00:10:46.641 --rc geninfo_all_blocks=1 00:10:46.641 --rc geninfo_unexecuted_blocks=1 00:10:46.641 00:10:46.641 ' 00:10:46.641 18:09:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:46.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.641 --rc genhtml_branch_coverage=1 00:10:46.641 --rc genhtml_function_coverage=1 00:10:46.641 --rc genhtml_legend=1 00:10:46.641 --rc geninfo_all_blocks=1 00:10:46.641 --rc geninfo_unexecuted_blocks=1 00:10:46.641 00:10:46.641 ' 00:10:46.641 18:09:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:46.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.641 --rc genhtml_branch_coverage=1 00:10:46.641 --rc genhtml_function_coverage=1 00:10:46.641 --rc genhtml_legend=1 00:10:46.641 --rc geninfo_all_blocks=1 00:10:46.641 --rc geninfo_unexecuted_blocks=1 00:10:46.641 00:10:46.641 ' 00:10:46.641 18:09:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:46.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.641 --rc genhtml_branch_coverage=1 00:10:46.641 --rc genhtml_function_coverage=1 00:10:46.641 --rc genhtml_legend=1 00:10:46.641 --rc geninfo_all_blocks=1 00:10:46.641 --rc geninfo_unexecuted_blocks=1 00:10:46.641 00:10:46.641 ' 00:10:46.641 18:09:05 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.641 18:09:05 -- nvmf/common.sh@7 -- # uname -s 00:10:46.641 18:09:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.641 18:09:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.641 18:09:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.641 18:09:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.641 18:09:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.641 18:09:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.641 18:09:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.641 18:09:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.641 18:09:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.641 18:09:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.641 18:09:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:10:46.641 
18:09:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:10:46.641 18:09:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.641 18:09:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.641 18:09:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.641 18:09:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.641 18:09:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.641 18:09:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.641 18:09:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.641 18:09:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.641 18:09:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.641 18:09:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.641 18:09:05 -- paths/export.sh@5 -- # export PATH 00:10:46.642 18:09:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.642 18:09:05 -- nvmf/common.sh@46 -- # : 0 00:10:46.642 18:09:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:46.642 18:09:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:46.642 18:09:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:46.642 18:09:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.642 18:09:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.642 18:09:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:46.642 18:09:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:46.642 18:09:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:46.642 18:09:05 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.642 18:09:05 -- fips/fips.sh@89 -- # check_openssl_version 00:10:46.642 18:09:05 -- fips/fips.sh@83 -- # local target=3.0.0 00:10:46.642 18:09:05 -- fips/fips.sh@85 -- # openssl version 00:10:46.642 18:09:05 -- fips/fips.sh@85 -- # awk '{print $2}' 00:10:46.642 18:09:05 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:10:46.642 18:09:05 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:10:46.642 18:09:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:46.642 18:09:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:46.642 18:09:05 -- scripts/common.sh@335 -- # IFS=.-: 00:10:46.642 18:09:05 -- scripts/common.sh@335 -- # read -ra ver1 00:10:46.642 18:09:05 -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.642 18:09:05 -- scripts/common.sh@336 -- # read -ra ver2 00:10:46.642 18:09:05 -- scripts/common.sh@337 -- # local 'op=>=' 00:10:46.642 18:09:05 -- scripts/common.sh@339 -- # ver1_l=3 00:10:46.642 18:09:05 -- scripts/common.sh@340 -- # ver2_l=3 00:10:46.642 18:09:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:46.642 18:09:05 -- scripts/common.sh@343 -- # case "$op" in 00:10:46.642 18:09:05 -- scripts/common.sh@347 -- # : 1 00:10:46.642 18:09:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:46.642 18:09:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.642 18:09:05 -- scripts/common.sh@364 -- # decimal 3 00:10:46.642 18:09:05 -- scripts/common.sh@352 -- # local d=3 00:10:46.642 18:09:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:46.642 18:09:05 -- scripts/common.sh@354 -- # echo 3 00:10:46.642 18:09:05 -- scripts/common.sh@364 -- # ver1[v]=3 00:10:46.642 18:09:05 -- scripts/common.sh@365 -- # decimal 3 00:10:46.642 18:09:05 -- scripts/common.sh@352 -- # local d=3 00:10:46.642 18:09:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:10:46.642 18:09:05 -- scripts/common.sh@354 -- # echo 3 00:10:46.642 18:09:05 -- scripts/common.sh@365 -- # ver2[v]=3 00:10:46.642 18:09:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:46.642 18:09:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:46.642 18:09:05 -- scripts/common.sh@363 -- # (( v++ )) 00:10:46.642 18:09:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.642 18:09:05 -- scripts/common.sh@364 -- # decimal 1 00:10:46.642 18:09:05 -- scripts/common.sh@352 -- # local d=1 00:10:46.642 18:09:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.642 18:09:05 -- scripts/common.sh@354 -- # echo 1 00:10:46.642 18:09:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:46.642 18:09:05 -- scripts/common.sh@365 -- # decimal 0 00:10:46.642 18:09:05 -- scripts/common.sh@352 -- # local d=0 00:10:46.642 18:09:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:10:46.642 18:09:05 -- scripts/common.sh@354 -- # echo 0 00:10:46.642 18:09:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:10:46.642 18:09:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:46.642 18:09:05 -- scripts/common.sh@366 -- # return 0 00:10:46.642 18:09:05 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:10:46.901 18:09:05 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:10:46.901 18:09:05 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:10:46.901 18:09:05 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:10:46.901 18:09:05 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:10:46.901 18:09:05 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:10:46.901 18:09:05 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:10:46.901 18:09:05 -- fips/fips.sh@113 -- # build_openssl_config 00:10:46.901 18:09:05 -- fips/fips.sh@37 -- # cat 00:10:46.901 18:09:05 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:10:46.901 18:09:05 -- fips/fips.sh@58 -- # cat - 00:10:46.901 18:09:05 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:10:46.901 18:09:05 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:10:46.901 18:09:05 -- fips/fips.sh@116 -- # mapfile -t providers 00:10:46.901 18:09:05 -- fips/fips.sh@116 -- # grep name 00:10:46.901 18:09:05 -- fips/fips.sh@116 -- # openssl list -providers 00:10:46.901 18:09:05 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:10:46.901 18:09:05 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:10:46.901 18:09:05 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:10:46.901 18:09:05 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:10:46.901 18:09:05 -- fips/fips.sh@127 -- # : 00:10:46.901 18:09:05 -- common/autotest_common.sh@650 -- # local es=0 00:10:46.901 18:09:05 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:10:46.901 18:09:05 -- common/autotest_common.sh@638 -- # local arg=openssl 00:10:46.901 18:09:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.901 18:09:05 -- common/autotest_common.sh@642 -- # type -t openssl 00:10:46.901 18:09:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.901 18:09:05 -- common/autotest_common.sh@644 -- # type -P openssl 00:10:46.901 18:09:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.901 18:09:05 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:10:46.901 18:09:05 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:10:46.901 18:09:05 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:10:46.901 Error setting digest 00:10:46.901 40A2123F977F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:10:46.901 40A2123F977F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:10:46.902 18:09:05 -- common/autotest_common.sh@653 -- # es=1 00:10:46.902 18:09:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:46.902 18:09:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:46.902 18:09:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:46.902 18:09:05 -- fips/fips.sh@130 -- # nvmftestinit 00:10:46.902 18:09:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:46.902 18:09:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.902 18:09:05 -- nvmf/common.sh@436 -- # prepare_net_devs 
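Stripped of the xtrace noise, the FIPS gate that just ran boils down to a handful of openssl probes: the version must be 3.0.0 or newer, a fips provider module must be installed, the provider list must show both a base and a fips provider, and with the generated spdk_fips.conf active a non-approved digest such as MD5 must fail, which is exactly the 'Error setting digest' output above. A hedged standalone sketch of that check (not part of the suite; spdk_fips.conf stands in for whatever build_openssl_config produced):

```bash
# Sketch of the FIPS sanity checks performed above.
openssl version | awk '{print $2}'                                # expected: >= 3.0.0
openssl info -modulesdir                                          # expected to contain fips.so
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name   # expect base + fips providers
# With the fips provider active, MD5 must be rejected:
if echo -n test | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded, FIPS mode is not enforced" >&2
else
    echo "MD5 rejected as expected (matches the 'Error setting digest' error above)"
fi
```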
00:10:46.902 18:09:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:46.902 18:09:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:46.902 18:09:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.902 18:09:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.902 18:09:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.902 18:09:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:46.902 18:09:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:46.902 18:09:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:46.902 18:09:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:46.902 18:09:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:46.902 18:09:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:46.902 18:09:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.902 18:09:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.902 18:09:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.902 18:09:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:46.902 18:09:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.902 18:09:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.902 18:09:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.902 18:09:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.902 18:09:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.902 18:09:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.902 18:09:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.902 18:09:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.902 18:09:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:46.902 18:09:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:46.902 Cannot find device "nvmf_tgt_br" 00:10:46.902 18:09:05 -- nvmf/common.sh@154 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.902 Cannot find device "nvmf_tgt_br2" 00:10:46.902 18:09:05 -- nvmf/common.sh@155 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:46.902 18:09:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:46.902 Cannot find device "nvmf_tgt_br" 00:10:46.902 18:09:05 -- nvmf/common.sh@157 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:46.902 Cannot find device "nvmf_tgt_br2" 00:10:46.902 18:09:05 -- nvmf/common.sh@158 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:46.902 18:09:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:46.902 18:09:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.902 18:09:05 -- nvmf/common.sh@161 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.902 18:09:05 -- nvmf/common.sh@162 -- # true 00:10:46.902 18:09:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:47.161 18:09:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:47.161 18:09:05 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:47.161 18:09:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.161 18:09:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.161 18:09:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.161 18:09:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.161 18:09:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:47.161 18:09:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:47.161 18:09:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:47.161 18:09:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:47.161 18:09:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:47.161 18:09:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:47.161 18:09:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.161 18:09:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.161 18:09:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.161 18:09:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:47.161 18:09:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:47.161 18:09:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.161 18:09:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.161 18:09:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.161 18:09:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.161 18:09:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.161 18:09:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:47.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:47.161 00:10:47.161 --- 10.0.0.2 ping statistics --- 00:10:47.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.161 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:47.161 18:09:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:47.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:47.161 00:10:47.161 --- 10.0.0.3 ping statistics --- 00:10:47.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.162 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:47.162 18:09:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:47.162 00:10:47.162 --- 10.0.0.1 ping statistics --- 00:10:47.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.162 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:47.162 18:09:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.162 18:09:05 -- nvmf/common.sh@421 -- # return 0 00:10:47.162 18:09:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:47.162 18:09:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.162 18:09:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:47.162 18:09:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:47.162 18:09:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.162 18:09:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:47.162 18:09:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:47.162 18:09:05 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:10:47.162 18:09:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:47.162 18:09:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.162 18:09:05 -- common/autotest_common.sh@10 -- # set +x 00:10:47.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.162 18:09:05 -- nvmf/common.sh@469 -- # nvmfpid=65865 00:10:47.162 18:09:05 -- nvmf/common.sh@470 -- # waitforlisten 65865 00:10:47.162 18:09:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.162 18:09:05 -- common/autotest_common.sh@829 -- # '[' -z 65865 ']' 00:10:47.162 18:09:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.162 18:09:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.162 18:09:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.162 18:09:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.162 18:09:05 -- common/autotest_common.sh@10 -- # set +x 00:10:47.421 [2024-11-18 18:09:05.788168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:47.421 [2024-11-18 18:09:05.788562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.421 [2024-11-18 18:09:05.928506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.421 [2024-11-18 18:09:05.999166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:47.421 [2024-11-18 18:09:05.999505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.421 [2024-11-18 18:09:05.999889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.421 [2024-11-18 18:09:06.000165] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
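All of the 10.0.0.x traffic in this test runs over a local veth/bridge topology that nvmf_veth_init built a few lines above: the target interfaces are moved into the nvmf_tgt_ns_spdk namespace, the initiator side stays in the root namespace, and both sides are joined through the nvmf_br bridge before connectivity is verified with the pings shown. A condensed, hedged sketch of those steps (a subset of the commands already traced above; the second target interface and the iptables rules are left out):

```bash
# Condensed from the nvmf_veth_init trace above; not a complete replica.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2          # initiator -> target, as in the ping output above
```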
00:10:47.421 [2024-11-18 18:09:06.000464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.356 18:09:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.356 18:09:06 -- common/autotest_common.sh@862 -- # return 0 00:10:48.356 18:09:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:48.356 18:09:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.356 18:09:06 -- common/autotest_common.sh@10 -- # set +x 00:10:48.356 18:09:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.356 18:09:06 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:10:48.356 18:09:06 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:48.356 18:09:06 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:48.356 18:09:06 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:10:48.356 18:09:06 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:48.356 18:09:06 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:48.356 18:09:06 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:48.356 18:09:06 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:48.614 [2024-11-18 18:09:07.075239] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.614 [2024-11-18 18:09:07.091161] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:48.614 [2024-11-18 18:09:07.091445] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.614 malloc0 00:10:48.614 18:09:07 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:48.614 18:09:07 -- fips/fips.sh@147 -- # bdevperf_pid=65904 00:10:48.614 18:09:07 -- fips/fips.sh@148 -- # waitforlisten 65904 /var/tmp/bdevperf.sock 00:10:48.614 18:09:07 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:48.614 18:09:07 -- common/autotest_common.sh@829 -- # '[' -z 65904 ']' 00:10:48.614 18:09:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:48.614 18:09:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.614 18:09:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.614 18:09:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.614 18:09:07 -- common/autotest_common.sh@10 -- # set +x 00:10:48.872 [2024-11-18 18:09:07.224107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
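The credential for this FIPS run is the plain-text NVMe/TCP PSK interchange string assigned to key a few lines above; it is written to key.txt with 0600 permissions, registered on the target side, and then handed to the bdev_nvme_attach_controller call that follows below before bdevperf.py triggers the I/O. A hedged sketch of the initiator-side steps, reusing the exact key, paths and flags from this log:

```bash
# Key string, paths and RPC flags are copied from the surrounding log.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# Attach a TLS-protected controller through the idle bdevperf's RPC socket,
# then kick off the verify workload (both commands appear below in the log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```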
00:10:48.872 [2024-11-18 18:09:07.224192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65904 ] 00:10:48.872 [2024-11-18 18:09:07.361186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.872 [2024-11-18 18:09:07.429909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.808 18:09:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.808 18:09:08 -- common/autotest_common.sh@862 -- # return 0 00:10:49.808 18:09:08 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:10:49.808 [2024-11-18 18:09:08.333314] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:49.808 TLSTESTn1 00:10:50.067 18:09:08 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:50.067 Running I/O for 10 seconds... 00:11:00.091 00:11:00.091 Latency(us) 00:11:00.091 [2024-11-18T18:09:18.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.091 [2024-11-18T18:09:18.695Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:00.091 Verification LBA range: start 0x0 length 0x2000 00:11:00.091 TLSTESTn1 : 10.01 6054.95 23.65 0.00 0.00 21105.59 5540.77 22758.87 00:11:00.091 [2024-11-18T18:09:18.695Z] =================================================================================================================== 00:11:00.091 [2024-11-18T18:09:18.695Z] Total : 6054.95 23.65 0.00 0.00 21105.59 5540.77 22758.87 00:11:00.091 0 00:11:00.091 18:09:18 -- fips/fips.sh@1 -- # cleanup 00:11:00.091 18:09:18 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:00.091 18:09:18 -- common/autotest_common.sh@806 -- # type=--id 00:11:00.091 18:09:18 -- common/autotest_common.sh@807 -- # id=0 00:11:00.091 18:09:18 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:00.091 18:09:18 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:00.091 18:09:18 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:00.091 18:09:18 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:00.091 18:09:18 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:00.091 18:09:18 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:00.091 nvmf_trace.0 00:11:00.091 18:09:18 -- common/autotest_common.sh@821 -- # return 0 00:11:00.091 18:09:18 -- fips/fips.sh@16 -- # killprocess 65904 00:11:00.091 18:09:18 -- common/autotest_common.sh@936 -- # '[' -z 65904 ']' 00:11:00.091 18:09:18 -- common/autotest_common.sh@940 -- # kill -0 65904 00:11:00.091 18:09:18 -- common/autotest_common.sh@941 -- # uname 00:11:00.091 18:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:00.091 18:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65904 00:11:00.091 killing process with pid 65904 00:11:00.091 Received shutdown signal, test time was about 10.000000 seconds 00:11:00.091 00:11:00.091 Latency(us) 00:11:00.091 
[2024-11-18T18:09:18.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.091 [2024-11-18T18:09:18.695Z] =================================================================================================================== 00:11:00.091 [2024-11-18T18:09:18.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:00.091 18:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:00.091 18:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:00.091 18:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65904' 00:11:00.091 18:09:18 -- common/autotest_common.sh@955 -- # kill 65904 00:11:00.091 18:09:18 -- common/autotest_common.sh@960 -- # wait 65904 00:11:00.350 18:09:18 -- fips/fips.sh@17 -- # nvmftestfini 00:11:00.350 18:09:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:00.350 18:09:18 -- nvmf/common.sh@116 -- # sync 00:11:00.350 18:09:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:00.350 18:09:18 -- nvmf/common.sh@119 -- # set +e 00:11:00.350 18:09:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:00.350 18:09:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:00.350 rmmod nvme_tcp 00:11:00.350 rmmod nvme_fabrics 00:11:00.350 rmmod nvme_keyring 00:11:00.350 18:09:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:00.350 18:09:18 -- nvmf/common.sh@123 -- # set -e 00:11:00.350 18:09:18 -- nvmf/common.sh@124 -- # return 0 00:11:00.350 18:09:18 -- nvmf/common.sh@477 -- # '[' -n 65865 ']' 00:11:00.350 18:09:18 -- nvmf/common.sh@478 -- # killprocess 65865 00:11:00.350 18:09:18 -- common/autotest_common.sh@936 -- # '[' -z 65865 ']' 00:11:00.350 18:09:18 -- common/autotest_common.sh@940 -- # kill -0 65865 00:11:00.350 18:09:18 -- common/autotest_common.sh@941 -- # uname 00:11:00.350 18:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:00.350 18:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65865 00:11:00.608 killing process with pid 65865 00:11:00.608 18:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:00.608 18:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:00.608 18:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65865' 00:11:00.608 18:09:18 -- common/autotest_common.sh@955 -- # kill 65865 00:11:00.608 18:09:18 -- common/autotest_common.sh@960 -- # wait 65865 00:11:00.608 18:09:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:00.608 18:09:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:00.608 18:09:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:00.608 18:09:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.608 18:09:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:00.608 18:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.608 18:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.608 18:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.608 18:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:00.608 18:09:19 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:00.608 ************************************ 00:11:00.608 END TEST nvmf_fips 00:11:00.608 ************************************ 00:11:00.608 00:11:00.608 real 0m14.200s 00:11:00.608 user 0m19.103s 00:11:00.608 sys 0m5.781s 00:11:00.608 18:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 
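The nvmf_fips run that finishes above exercises NVMe/TCP with TLS: the script writes a pre-shared key to a file, restricts it to mode 0600, and hands the path to bdevperf's bdev_nvme_attach_controller RPC through --psk. A minimal sketch of that key handling, assuming a target already listening on 10.0.0.2:4420, a bdevperf instance answering on /var/tmp/bdevperf.sock, and an illustrative key path under /tmp:

    # Store the TLS PSK interchange string without a trailing newline and keep it private.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/tmp/fips_key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"

    # Attach a TLS-protected controller through the bdevperf RPC socket, as the trace above does.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"

Note that TLS support is flagged as experimental in both the listener and the attach path of this SPDK build, so the exact arguments may differ in other releases.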
00:11:00.608 18:09:19 -- common/autotest_common.sh@10 -- # set +x 00:11:00.867 18:09:19 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:00.867 18:09:19 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:00.867 18:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.867 18:09:19 -- common/autotest_common.sh@10 -- # set +x 00:11:00.867 ************************************ 00:11:00.867 START TEST nvmf_fuzz 00:11:00.867 ************************************ 00:11:00.867 18:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:00.867 * Looking for test storage... 00:11:00.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.867 18:09:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:00.867 18:09:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:00.867 18:09:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:00.867 18:09:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:00.867 18:09:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:00.867 18:09:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:00.867 18:09:19 -- scripts/common.sh@335 -- # IFS=.-: 00:11:00.867 18:09:19 -- scripts/common.sh@335 -- # read -ra ver1 00:11:00.867 18:09:19 -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.867 18:09:19 -- scripts/common.sh@336 -- # read -ra ver2 00:11:00.867 18:09:19 -- scripts/common.sh@337 -- # local 'op=<' 00:11:00.867 18:09:19 -- scripts/common.sh@339 -- # ver1_l=2 00:11:00.867 18:09:19 -- scripts/common.sh@340 -- # ver2_l=1 00:11:00.867 18:09:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:00.867 18:09:19 -- scripts/common.sh@343 -- # case "$op" in 00:11:00.867 18:09:19 -- scripts/common.sh@344 -- # : 1 00:11:00.867 18:09:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:00.867 18:09:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.867 18:09:19 -- scripts/common.sh@364 -- # decimal 1 00:11:00.867 18:09:19 -- scripts/common.sh@352 -- # local d=1 00:11:00.867 18:09:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.867 18:09:19 -- scripts/common.sh@354 -- # echo 1 00:11:00.867 18:09:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:00.867 18:09:19 -- scripts/common.sh@365 -- # decimal 2 00:11:00.867 18:09:19 -- scripts/common.sh@352 -- # local d=2 00:11:00.867 18:09:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.867 18:09:19 -- scripts/common.sh@354 -- # echo 2 00:11:00.867 18:09:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:00.867 18:09:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:00.867 18:09:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:00.867 18:09:19 -- scripts/common.sh@367 -- # return 0 00:11:00.867 18:09:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:00.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.867 --rc genhtml_branch_coverage=1 00:11:00.867 --rc genhtml_function_coverage=1 00:11:00.867 --rc genhtml_legend=1 00:11:00.867 --rc geninfo_all_blocks=1 00:11:00.867 --rc geninfo_unexecuted_blocks=1 00:11:00.867 00:11:00.867 ' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:00.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.867 --rc genhtml_branch_coverage=1 00:11:00.867 --rc genhtml_function_coverage=1 00:11:00.867 --rc genhtml_legend=1 00:11:00.867 --rc geninfo_all_blocks=1 00:11:00.867 --rc geninfo_unexecuted_blocks=1 00:11:00.867 00:11:00.867 ' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:00.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.867 --rc genhtml_branch_coverage=1 00:11:00.867 --rc genhtml_function_coverage=1 00:11:00.867 --rc genhtml_legend=1 00:11:00.867 --rc geninfo_all_blocks=1 00:11:00.867 --rc geninfo_unexecuted_blocks=1 00:11:00.867 00:11:00.867 ' 00:11:00.867 18:09:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:00.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.867 --rc genhtml_branch_coverage=1 00:11:00.867 --rc genhtml_function_coverage=1 00:11:00.867 --rc genhtml_legend=1 00:11:00.867 --rc geninfo_all_blocks=1 00:11:00.867 --rc geninfo_unexecuted_blocks=1 00:11:00.867 00:11:00.867 ' 00:11:00.867 18:09:19 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.867 18:09:19 -- nvmf/common.sh@7 -- # uname -s 00:11:00.867 18:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.867 18:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.867 18:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.867 18:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.867 18:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.867 18:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.867 18:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.867 18:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.867 18:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.867 18:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.867 18:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 
00:11:00.867 18:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:11:00.867 18:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.867 18:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.867 18:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.867 18:09:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.867 18:09:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.867 18:09:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.867 18:09:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.867 18:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.867 18:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.867 18:09:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.867 18:09:19 -- paths/export.sh@5 -- # export PATH 00:11:00.867 18:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.867 18:09:19 -- nvmf/common.sh@46 -- # : 0 00:11:00.867 18:09:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:00.867 18:09:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:00.867 18:09:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:00.867 18:09:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.867 18:09:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.867 18:09:19 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:00.867 18:09:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:00.867 18:09:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:00.867 18:09:19 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:00.867 18:09:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:00.867 18:09:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.868 18:09:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:00.868 18:09:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:00.868 18:09:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:00.868 18:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.868 18:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.868 18:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.868 18:09:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:00.868 18:09:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:00.868 18:09:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:00.868 18:09:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:00.868 18:09:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:00.868 18:09:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:00.868 18:09:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.868 18:09:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.868 18:09:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:00.868 18:09:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:00.868 18:09:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.868 18:09:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.868 18:09:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.868 18:09:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.868 18:09:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.868 18:09:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.868 18:09:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.868 18:09:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.868 18:09:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:00.868 18:09:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:01.126 Cannot find device "nvmf_tgt_br" 00:11:01.126 18:09:19 -- nvmf/common.sh@154 -- # true 00:11:01.126 18:09:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.126 Cannot find device "nvmf_tgt_br2" 00:11:01.126 18:09:19 -- nvmf/common.sh@155 -- # true 00:11:01.126 18:09:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:01.126 18:09:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:01.126 Cannot find device "nvmf_tgt_br" 00:11:01.126 18:09:19 -- nvmf/common.sh@157 -- # true 00:11:01.126 18:09:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:01.126 Cannot find device "nvmf_tgt_br2" 00:11:01.126 18:09:19 -- nvmf/common.sh@158 -- # true 00:11:01.126 18:09:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:01.126 18:09:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:01.126 18:09:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.126 18:09:19 -- nvmf/common.sh@161 -- # true 00:11:01.126 18:09:19 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.126 18:09:19 -- nvmf/common.sh@162 -- # true 00:11:01.126 18:09:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.126 18:09:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.126 18:09:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.126 18:09:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.126 18:09:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.126 18:09:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.126 18:09:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.126 18:09:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:01.126 18:09:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:01.126 18:09:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:01.126 18:09:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:01.126 18:09:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:01.126 18:09:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:01.126 18:09:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.126 18:09:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.126 18:09:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.126 18:09:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:01.126 18:09:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:01.126 18:09:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.126 18:09:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.126 18:09:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.126 18:09:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.385 18:09:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.385 18:09:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:01.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:01.385 00:11:01.385 --- 10.0.0.2 ping statistics --- 00:11:01.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.385 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:01.385 18:09:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:01.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:11:01.385 00:11:01.385 --- 10.0.0.3 ping statistics --- 00:11:01.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.385 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:01.385 18:09:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:01.385 00:11:01.385 --- 10.0.0.1 ping statistics --- 00:11:01.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.385 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:01.385 18:09:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.385 18:09:19 -- nvmf/common.sh@421 -- # return 0 00:11:01.385 18:09:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:01.385 18:09:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.385 18:09:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:01.385 18:09:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:01.385 18:09:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.386 18:09:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:01.386 18:09:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:01.386 18:09:19 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66234 00:11:01.386 18:09:19 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:01.386 18:09:19 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:01.386 18:09:19 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66234 00:11:01.386 18:09:19 -- common/autotest_common.sh@829 -- # '[' -z 66234 ']' 00:11:01.386 18:09:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.386 18:09:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.386 18:09:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
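The nvmf_veth_init block traced above builds the virtual test network the target listens on: a network namespace for the SPDK target, veth pairs whose peer ends stay in the root namespace, a bridge joining them, and an iptables rule admitting port 4420. A condensed sketch of that topology, using the addresses from the trace (only the initiator/target pair is shown; the second target interface, 10.0.0.3, is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                        # sanity check, as above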
00:11:01.386 18:09:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.386 18:09:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 18:09:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.321 18:09:20 -- common/autotest_common.sh@862 -- # return 0 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.321 18:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.321 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 18:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:02.321 18:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.321 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 Malloc0 00:11:02.321 18:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.321 18:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.321 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 18:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.321 18:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.321 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 18:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.321 18:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.321 18:09:20 -- common/autotest_common.sh@10 -- # set +x 00:11:02.321 18:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:02.321 18:09:20 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:02.580 Shutting down the fuzz application 00:11:02.580 18:09:21 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:02.840 Shutting down the fuzz application 00:11:02.840 18:09:21 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.840 18:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.840 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 18:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.100 18:09:21 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:03.100 18:09:21 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:03.100 18:09:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:03.100 18:09:21 -- nvmf/common.sh@116 -- # sync 00:11:03.100 18:09:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:03.100 18:09:21 -- nvmf/common.sh@119 -- # set +e 00:11:03.100 18:09:21 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:11:03.100 18:09:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:03.100 rmmod nvme_tcp 00:11:03.100 rmmod nvme_fabrics 00:11:03.100 rmmod nvme_keyring 00:11:03.100 18:09:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:03.100 18:09:21 -- nvmf/common.sh@123 -- # set -e 00:11:03.100 18:09:21 -- nvmf/common.sh@124 -- # return 0 00:11:03.100 18:09:21 -- nvmf/common.sh@477 -- # '[' -n 66234 ']' 00:11:03.100 18:09:21 -- nvmf/common.sh@478 -- # killprocess 66234 00:11:03.100 18:09:21 -- common/autotest_common.sh@936 -- # '[' -z 66234 ']' 00:11:03.100 18:09:21 -- common/autotest_common.sh@940 -- # kill -0 66234 00:11:03.100 18:09:21 -- common/autotest_common.sh@941 -- # uname 00:11:03.100 18:09:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.100 18:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66234 00:11:03.100 18:09:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:03.100 18:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:03.100 killing process with pid 66234 00:11:03.100 18:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66234' 00:11:03.100 18:09:21 -- common/autotest_common.sh@955 -- # kill 66234 00:11:03.100 18:09:21 -- common/autotest_common.sh@960 -- # wait 66234 00:11:03.359 18:09:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:03.359 18:09:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:03.359 18:09:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:03.359 18:09:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.359 18:09:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:03.359 18:09:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.359 18:09:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.359 18:09:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.359 18:09:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:03.359 18:09:21 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:03.359 00:11:03.359 real 0m2.610s 00:11:03.359 user 0m2.711s 00:11:03.359 sys 0m0.577s 00:11:03.359 18:09:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:03.359 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:11:03.359 ************************************ 00:11:03.359 END TEST nvmf_fuzz 00:11:03.359 ************************************ 00:11:03.359 18:09:21 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:03.359 18:09:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.359 18:09:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.359 18:09:21 -- common/autotest_common.sh@10 -- # set +x 00:11:03.359 ************************************ 00:11:03.359 START TEST nvmf_multiconnection 00:11:03.359 ************************************ 00:11:03.359 18:09:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:03.359 * Looking for test storage... 
00:11:03.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.618 18:09:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:03.618 18:09:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:03.618 18:09:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:03.618 18:09:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:03.618 18:09:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:03.618 18:09:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:03.618 18:09:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:03.618 18:09:22 -- scripts/common.sh@335 -- # IFS=.-: 00:11:03.618 18:09:22 -- scripts/common.sh@335 -- # read -ra ver1 00:11:03.618 18:09:22 -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.618 18:09:22 -- scripts/common.sh@336 -- # read -ra ver2 00:11:03.618 18:09:22 -- scripts/common.sh@337 -- # local 'op=<' 00:11:03.618 18:09:22 -- scripts/common.sh@339 -- # ver1_l=2 00:11:03.618 18:09:22 -- scripts/common.sh@340 -- # ver2_l=1 00:11:03.618 18:09:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:03.618 18:09:22 -- scripts/common.sh@343 -- # case "$op" in 00:11:03.618 18:09:22 -- scripts/common.sh@344 -- # : 1 00:11:03.618 18:09:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:03.618 18:09:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.618 18:09:22 -- scripts/common.sh@364 -- # decimal 1 00:11:03.618 18:09:22 -- scripts/common.sh@352 -- # local d=1 00:11:03.618 18:09:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.618 18:09:22 -- scripts/common.sh@354 -- # echo 1 00:11:03.618 18:09:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:03.618 18:09:22 -- scripts/common.sh@365 -- # decimal 2 00:11:03.618 18:09:22 -- scripts/common.sh@352 -- # local d=2 00:11:03.618 18:09:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.618 18:09:22 -- scripts/common.sh@354 -- # echo 2 00:11:03.618 18:09:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:03.618 18:09:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:03.618 18:09:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:03.618 18:09:22 -- scripts/common.sh@367 -- # return 0 00:11:03.618 18:09:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.618 18:09:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:03.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.618 --rc genhtml_branch_coverage=1 00:11:03.618 --rc genhtml_function_coverage=1 00:11:03.618 --rc genhtml_legend=1 00:11:03.618 --rc geninfo_all_blocks=1 00:11:03.618 --rc geninfo_unexecuted_blocks=1 00:11:03.618 00:11:03.618 ' 00:11:03.618 18:09:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:03.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.618 --rc genhtml_branch_coverage=1 00:11:03.618 --rc genhtml_function_coverage=1 00:11:03.618 --rc genhtml_legend=1 00:11:03.618 --rc geninfo_all_blocks=1 00:11:03.618 --rc geninfo_unexecuted_blocks=1 00:11:03.618 00:11:03.618 ' 00:11:03.618 18:09:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:03.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.618 --rc genhtml_branch_coverage=1 00:11:03.618 --rc genhtml_function_coverage=1 00:11:03.618 --rc genhtml_legend=1 00:11:03.618 --rc geninfo_all_blocks=1 00:11:03.618 --rc geninfo_unexecuted_blocks=1 00:11:03.618 00:11:03.618 ' 00:11:03.618 
18:09:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:03.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.618 --rc genhtml_branch_coverage=1 00:11:03.618 --rc genhtml_function_coverage=1 00:11:03.618 --rc genhtml_legend=1 00:11:03.618 --rc geninfo_all_blocks=1 00:11:03.618 --rc geninfo_unexecuted_blocks=1 00:11:03.618 00:11:03.618 ' 00:11:03.618 18:09:22 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.618 18:09:22 -- nvmf/common.sh@7 -- # uname -s 00:11:03.618 18:09:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.618 18:09:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.618 18:09:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.618 18:09:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.618 18:09:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.618 18:09:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.618 18:09:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.618 18:09:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.618 18:09:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.618 18:09:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.618 18:09:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:11:03.618 18:09:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:11:03.618 18:09:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.618 18:09:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.618 18:09:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.618 18:09:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.618 18:09:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.618 18:09:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.618 18:09:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.618 18:09:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.618 18:09:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.618 18:09:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.618 18:09:22 -- paths/export.sh@5 -- # export PATH 00:11:03.618 18:09:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.618 18:09:22 -- nvmf/common.sh@46 -- # : 0 00:11:03.618 18:09:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:03.618 18:09:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:03.618 18:09:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:03.619 18:09:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.619 18:09:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.619 18:09:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:03.619 18:09:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:03.619 18:09:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:03.619 18:09:22 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.619 18:09:22 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.619 18:09:22 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:03.619 18:09:22 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:03.619 18:09:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:03.619 18:09:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.619 18:09:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:03.619 18:09:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:03.619 18:09:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:03.619 18:09:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.619 18:09:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.619 18:09:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.619 18:09:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:03.619 18:09:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:03.619 18:09:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:03.619 18:09:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:03.619 18:09:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:03.619 18:09:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:03.619 18:09:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.619 18:09:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.619 18:09:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.619 18:09:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:03.619 18:09:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.619 18:09:22 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.619 18:09:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.619 18:09:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.619 18:09:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.619 18:09:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.619 18:09:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.619 18:09:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.619 18:09:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:03.619 18:09:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:03.619 Cannot find device "nvmf_tgt_br" 00:11:03.619 18:09:22 -- nvmf/common.sh@154 -- # true 00:11:03.619 18:09:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.619 Cannot find device "nvmf_tgt_br2" 00:11:03.619 18:09:22 -- nvmf/common.sh@155 -- # true 00:11:03.619 18:09:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:03.619 18:09:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:03.619 Cannot find device "nvmf_tgt_br" 00:11:03.619 18:09:22 -- nvmf/common.sh@157 -- # true 00:11:03.619 18:09:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:03.619 Cannot find device "nvmf_tgt_br2" 00:11:03.619 18:09:22 -- nvmf/common.sh@158 -- # true 00:11:03.619 18:09:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:03.619 18:09:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:03.877 18:09:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.877 18:09:22 -- nvmf/common.sh@161 -- # true 00:11:03.877 18:09:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.877 18:09:22 -- nvmf/common.sh@162 -- # true 00:11:03.877 18:09:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.877 18:09:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.877 18:09:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.877 18:09:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.877 18:09:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.877 18:09:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.877 18:09:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.877 18:09:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:03.877 18:09:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:03.877 18:09:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:03.877 18:09:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:03.877 18:09:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:03.877 18:09:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:03.877 18:09:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.877 18:09:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:03.877 18:09:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.877 18:09:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:03.877 18:09:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:03.877 18:09:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.877 18:09:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.877 18:09:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.877 18:09:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.878 18:09:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.878 18:09:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:03.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:11:03.878 00:11:03.878 --- 10.0.0.2 ping statistics --- 00:11:03.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.878 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:03.878 18:09:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:03.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:03.878 00:11:03.878 --- 10.0.0.3 ping statistics --- 00:11:03.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.878 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:03.878 18:09:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:03.878 00:11:03.878 --- 10.0.0.1 ping statistics --- 00:11:03.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.878 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:03.878 18:09:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.878 18:09:22 -- nvmf/common.sh@421 -- # return 0 00:11:03.878 18:09:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:03.878 18:09:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.878 18:09:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:03.878 18:09:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:03.878 18:09:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.878 18:09:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:03.878 18:09:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:03.878 18:09:22 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:03.878 18:09:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:03.878 18:09:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.878 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:11:03.878 18:09:22 -- nvmf/common.sh@469 -- # nvmfpid=66430 00:11:03.878 18:09:22 -- nvmf/common.sh@470 -- # waitforlisten 66430 00:11:03.878 18:09:22 -- common/autotest_common.sh@829 -- # '[' -z 66430 ']' 00:11:03.878 18:09:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
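With the namespace in place, nvmfappstart launches the target inside it and waitforlisten polls until the application answers on its RPC socket, which is what the "Waiting for process..." lines reflect. A minimal sketch of that start-and-wait step, assuming the default socket path; the retry loop below is illustrative rather than the exact helper logic (the real helper bounds its retries and then gives up):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the target is ready to accept commands.
    for (( i = 0; i < 100; i++ )); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done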
00:11:03.878 18:09:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.878 18:09:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.878 18:09:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.878 18:09:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.878 18:09:22 -- common/autotest_common.sh@10 -- # set +x 00:11:04.136 [2024-11-18 18:09:22.493432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:04.136 [2024-11-18 18:09:22.493544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.136 [2024-11-18 18:09:22.636306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.136 [2024-11-18 18:09:22.705550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.136 [2024-11-18 18:09:22.705707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.136 [2024-11-18 18:09:22.705721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.136 [2024-11-18 18:09:22.705732] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.136 [2024-11-18 18:09:22.705875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.136 [2024-11-18 18:09:22.705989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.136 [2024-11-18 18:09:22.706120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.136 [2024-11-18 18:09:22.706126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.073 18:09:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.073 18:09:23 -- common/autotest_common.sh@862 -- # return 0 00:11:05.073 18:09:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:05.073 18:09:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.073 18:09:23 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 [2024-11-18 18:09:23.543175] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:05.073 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.073 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 Malloc1 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:05.073 
18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 [2024-11-18 18:09:23.601637] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.073 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 Malloc2 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.073 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 Malloc3 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 
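The same four RPC calls repeat for each of the eleven subsystems the multiconnection test creates. Condensed into the loop the script effectively runs (rpc.py is invoked directly here for illustration, while the test drives it through its rpc_cmd helper against the target in the namespace):

    # One malloc-backed subsystem per connection, eleven in total.
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done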
00:11:05.073 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.073 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:05.073 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.073 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.332 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 Malloc4 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.332 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 Malloc5 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:05.332 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.332 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.332 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.332 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.332 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc6 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 Malloc6 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.333 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 Malloc7 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.333 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 Malloc8 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.333 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 Malloc9 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.333 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.333 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.333 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:05.333 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.333 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 Malloc10 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.592 18:09:23 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 Malloc11 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:23 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:05.592 18:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.592 18:09:23 -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 18:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.592 18:09:24 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:05.592 18:09:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:05.592 18:09:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.592 18:09:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:05.592 18:09:24 -- common/autotest_common.sh@1187 -- # local i=0 00:11:05.592 18:09:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.592 18:09:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:05.592 18:09:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:08.128 18:09:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:08.128 18:09:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:08.128 18:09:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:11:08.128 18:09:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:08.128 18:09:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.128 18:09:26 -- common/autotest_common.sh@1197 -- # return 0 00:11:08.128 18:09:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:08.128 18:09:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:08.128 18:09:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:08.128 18:09:26 -- common/autotest_common.sh@1187 -- # local i=0 00:11:08.128 18:09:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.128 18:09:26 -- common/autotest_common.sh@1189 -- # [[ -n '' 
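What the trace above boils down to: target/multiconnection.sh loops over 11 subsystems and, for each one, creates a 64 MB malloc bdev with a 512-byte block size, creates an allow-any-host subsystem with serial number SPDKn, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A minimal stand-alone sketch of that loop, assuming scripts/rpc.py is invoked directly (the harness goes through its rpc_cmd wrapper instead; the RPC names and arguments are copied from the log):

#!/usr/bin/env bash
# Build 11 NVMe-oF/TCP subsystems, each backed by one malloc bdev.
for i in $(seq 1 11); do
  scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a: allow any host, -s: serial number
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # expose the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done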
]] 00:11:08.128 18:09:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:10.032 18:09:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:10.032 18:09:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:10.032 18:09:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:11:10.032 18:09:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:10.032 18:09:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.033 18:09:28 -- common/autotest_common.sh@1197 -- # return 0 00:11:10.033 18:09:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:10.033 18:09:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:10.033 18:09:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:10.033 18:09:28 -- common/autotest_common.sh@1187 -- # local i=0 00:11:10.033 18:09:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.033 18:09:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:10.033 18:09:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:11.937 18:09:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:11.937 18:09:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:11.937 18:09:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:11:11.937 18:09:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:11.937 18:09:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.937 18:09:30 -- common/autotest_common.sh@1197 -- # return 0 00:11:11.937 18:09:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:11.937 18:09:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:11:12.199 18:09:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:11:12.199 18:09:30 -- common/autotest_common.sh@1187 -- # local i=0 00:11:12.199 18:09:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.199 18:09:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:12.199 18:09:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:14.149 18:09:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:14.149 18:09:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:14.149 18:09:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:11:14.149 18:09:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:14.149 18:09:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.149 18:09:32 -- common/autotest_common.sh@1197 -- # return 0 00:11:14.149 18:09:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:14.149 18:09:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:11:14.408 18:09:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:11:14.408 18:09:32 -- common/autotest_common.sh@1187 -- # local i=0 00:11:14.408 18:09:32 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.408 18:09:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:14.408 18:09:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:16.312 18:09:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:16.312 18:09:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:16.313 18:09:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:11:16.313 18:09:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:16.313 18:09:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.313 18:09:34 -- common/autotest_common.sh@1197 -- # return 0 00:11:16.313 18:09:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:16.313 18:09:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:11:16.571 18:09:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:11:16.571 18:09:34 -- common/autotest_common.sh@1187 -- # local i=0 00:11:16.571 18:09:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.571 18:09:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:16.571 18:09:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:18.477 18:09:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:18.477 18:09:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:18.477 18:09:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:11:18.477 18:09:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:18.477 18:09:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.477 18:09:36 -- common/autotest_common.sh@1197 -- # return 0 00:11:18.477 18:09:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:18.477 18:09:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:11:18.735 18:09:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:11:18.735 18:09:37 -- common/autotest_common.sh@1187 -- # local i=0 00:11:18.735 18:09:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.735 18:09:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:18.735 18:09:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:20.638 18:09:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:20.638 18:09:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:20.639 18:09:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:11:20.639 18:09:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:20.639 18:09:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.639 18:09:39 -- common/autotest_common.sh@1197 -- # return 0 00:11:20.639 18:09:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:20.639 18:09:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:11:20.897 18:09:39 -- 
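On the host side, each nvme connect above is immediately followed by waitforserial, which polls lsblk until a block device carrying the SPDKn serial shows up (up to 15 tries, 2 seconds apart, as the trace shows). A condensed sketch of that connect-and-wait step, with the hostnqn/hostid values taken from the log:

HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5
for i in $(seq 1 11); do
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n "nqn.2016-06.io.spdk:cnode$i" \
      --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID"
  # Wait until lsblk reports a namespace with the expected serial number.
  tries=0
  while (( tries++ <= 15 )); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") >= 1 )) && break
    sleep 2
  done
done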
target/multiconnection.sh@30 -- # waitforserial SPDK8 00:11:20.897 18:09:39 -- common/autotest_common.sh@1187 -- # local i=0 00:11:20.897 18:09:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.897 18:09:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:20.897 18:09:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:22.799 18:09:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:22.799 18:09:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:22.799 18:09:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:11:22.799 18:09:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:22.799 18:09:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.799 18:09:41 -- common/autotest_common.sh@1197 -- # return 0 00:11:22.799 18:09:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:22.799 18:09:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:11:23.058 18:09:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:11:23.058 18:09:41 -- common/autotest_common.sh@1187 -- # local i=0 00:11:23.058 18:09:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.058 18:09:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:23.058 18:09:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:24.961 18:09:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:24.961 18:09:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:24.961 18:09:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:11:24.961 18:09:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:24.961 18:09:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.961 18:09:43 -- common/autotest_common.sh@1197 -- # return 0 00:11:24.961 18:09:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:24.961 18:09:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:11:25.221 18:09:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:11:25.221 18:09:43 -- common/autotest_common.sh@1187 -- # local i=0 00:11:25.221 18:09:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.221 18:09:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:25.221 18:09:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:27.124 18:09:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:27.124 18:09:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:27.124 18:09:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:11:27.124 18:09:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:27.124 18:09:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.124 18:09:45 -- common/autotest_common.sh@1197 -- # return 0 00:11:27.124 18:09:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:27.124 18:09:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 
--hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:11:27.384 18:09:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:11:27.384 18:09:45 -- common/autotest_common.sh@1187 -- # local i=0 00:11:27.384 18:09:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.384 18:09:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:27.384 18:09:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:29.290 18:09:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:29.290 18:09:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:29.290 18:09:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:11:29.290 18:09:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:29.290 18:09:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.290 18:09:47 -- common/autotest_common.sh@1197 -- # return 0 00:11:29.290 18:09:47 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:11:29.290 [global] 00:11:29.290 thread=1 00:11:29.290 invalidate=1 00:11:29.290 rw=read 00:11:29.290 time_based=1 00:11:29.290 runtime=10 00:11:29.290 ioengine=libaio 00:11:29.290 direct=1 00:11:29.290 bs=262144 00:11:29.290 iodepth=64 00:11:29.290 norandommap=1 00:11:29.290 numjobs=1 00:11:29.290 00:11:29.290 [job0] 00:11:29.290 filename=/dev/nvme0n1 00:11:29.290 [job1] 00:11:29.290 filename=/dev/nvme10n1 00:11:29.290 [job2] 00:11:29.290 filename=/dev/nvme1n1 00:11:29.290 [job3] 00:11:29.290 filename=/dev/nvme2n1 00:11:29.290 [job4] 00:11:29.290 filename=/dev/nvme3n1 00:11:29.290 [job5] 00:11:29.290 filename=/dev/nvme4n1 00:11:29.290 [job6] 00:11:29.290 filename=/dev/nvme5n1 00:11:29.290 [job7] 00:11:29.290 filename=/dev/nvme6n1 00:11:29.290 [job8] 00:11:29.290 filename=/dev/nvme7n1 00:11:29.290 [job9] 00:11:29.290 filename=/dev/nvme8n1 00:11:29.290 [job10] 00:11:29.290 filename=/dev/nvme9n1 00:11:29.562 Could not set queue depth (nvme0n1) 00:11:29.562 Could not set queue depth (nvme10n1) 00:11:29.562 Could not set queue depth (nvme1n1) 00:11:29.562 Could not set queue depth (nvme2n1) 00:11:29.562 Could not set queue depth (nvme3n1) 00:11:29.562 Could not set queue depth (nvme4n1) 00:11:29.562 Could not set queue depth (nvme5n1) 00:11:29.562 Could not set queue depth (nvme6n1) 00:11:29.562 Could not set queue depth (nvme7n1) 00:11:29.562 Could not set queue depth (nvme8n1) 00:11:29.562 Could not set queue depth (nvme9n1) 00:11:29.562 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
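The fio-wrapper invocation above (-p nvmf -i 262144 -d 64 -t read -r 10) generates and echoes the job file shown in the [global]/[jobN] blocks; judging purely from that output, -i maps to the block size, -d to the queue depth, -t to the rw mode and -r to the runtime, with one job per connected namespace. A hand-written equivalent (the flag-to-option mapping is inferred from the log, not taken from the wrapper's source) would be roughly:

# Write a job file matching the echoed parameters and run it with stock fio.
cat > multiconn-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
# ...one [jobN] section per connected namespace, /dev/nvme1n1 through /dev/nvme10n1
EOF
fio multiconn-read.fio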
ioengine=libaio, iodepth=64 00:11:29.562 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:29.562 fio-3.35 00:11:29.562 Starting 11 threads 00:11:41.786 00:11:41.786 job0: (groupid=0, jobs=1): err= 0: pid=66890: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=450, BW=113MiB/s (118MB/s)(1139MiB/10111msec) 00:11:41.786 slat (usec): min=19, max=50373, avg=2193.14, stdev=4903.30 00:11:41.786 clat (msec): min=38, max=236, avg=139.64, stdev=11.06 00:11:41.786 lat (msec): min=39, max=236, avg=141.84, stdev=11.25 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 109], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 134], 00:11:41.786 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142], 00:11:41.786 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 155], 00:11:41.786 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 222], 99.95th=[ 236], 00:11:41.786 | 99.99th=[ 236] 00:11:41.786 bw ( KiB/s): min=108544, max=126722, per=6.79%, avg=115028.30, stdev=3802.94, samples=20 00:11:41.786 iops : min= 424, max= 495, avg=449.10, stdev=14.83, samples=20 00:11:41.786 lat (msec) : 50=0.11%, 100=0.35%, 250=99.54% 00:11:41.786 cpu : usr=0.21%, sys=1.70%, ctx=1058, majf=0, minf=4097 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=4556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job1: (groupid=0, jobs=1): err= 0: pid=66891: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=541, BW=135MiB/s (142MB/s)(1363MiB/10077msec) 00:11:41.786 slat (usec): min=20, max=69568, avg=1834.02, stdev=4550.14 00:11:41.786 clat (msec): min=53, max=194, avg=116.31, stdev=14.08 00:11:41.786 lat (msec): min=80, max=194, avg=118.14, stdev=14.11 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 94], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 107], 00:11:41.786 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 117], 00:11:41.786 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 140], 00:11:41.786 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 194], 00:11:41.786 | 99.99th=[ 194] 00:11:41.786 bw ( KiB/s): min=97792, max=147456, per=8.14%, avg=137907.75, stdev=12362.98, samples=20 00:11:41.786 iops : min= 382, max= 576, avg=538.65, stdev=48.30, samples=20 00:11:41.786 lat (msec) : 100=5.65%, 250=94.35% 00:11:41.786 cpu : usr=0.23%, sys=2.04%, ctx=1092, majf=0, minf=4097 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=5452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job2: (groupid=0, jobs=1): err= 0: pid=66892: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=452, BW=113MiB/s (119MB/s)(1145MiB/10110msec) 00:11:41.786 slat (usec): min=21, max=67899, 
avg=2183.77, stdev=4983.17 00:11:41.786 clat (msec): min=43, max=230, avg=138.97, stdev=12.18 00:11:41.786 lat (msec): min=43, max=243, avg=141.15, stdev=12.51 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 90], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 134], 00:11:41.786 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142], 00:11:41.786 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 155], 00:11:41.786 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 232], 00:11:41.786 | 99.99th=[ 232] 00:11:41.786 bw ( KiB/s): min=110371, max=121344, per=6.82%, avg=115593.75, stdev=2692.79, samples=20 00:11:41.786 iops : min= 431, max= 474, avg=451.30, stdev=10.51, samples=20 00:11:41.786 lat (msec) : 50=0.13%, 100=1.27%, 250=98.60% 00:11:41.786 cpu : usr=0.29%, sys=1.99%, ctx=1026, majf=0, minf=4097 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=4578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job3: (groupid=0, jobs=1): err= 0: pid=66893: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=546, BW=137MiB/s (143MB/s)(1377MiB/10079msec) 00:11:41.786 slat (usec): min=18, max=59704, avg=1804.10, stdev=4168.09 00:11:41.786 clat (msec): min=17, max=188, avg=115.18, stdev=15.01 00:11:41.786 lat (msec): min=17, max=195, avg=116.98, stdev=15.02 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 80], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:11:41.786 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:11:41.786 | 70.00th=[ 120], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 142], 00:11:41.786 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 188], 00:11:41.786 | 99.99th=[ 188] 00:11:41.786 bw ( KiB/s): min=116224, max=146944, per=8.22%, avg=139395.40, stdev=7704.23, samples=20 00:11:41.786 iops : min= 454, max= 574, avg=544.40, stdev=30.07, samples=20 00:11:41.786 lat (msec) : 20=0.04%, 50=0.22%, 100=7.19%, 250=92.55% 00:11:41.786 cpu : usr=0.42%, sys=2.24%, ctx=1187, majf=0, minf=4098 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=5507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job4: (groupid=0, jobs=1): err= 0: pid=66894: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=1057, BW=264MiB/s (277MB/s)(2646MiB/10013msec) 00:11:41.786 slat (usec): min=20, max=27959, avg=930.09, stdev=2283.62 00:11:41.786 clat (msec): min=6, max=141, avg=59.56, stdev=13.81 00:11:41.786 lat (msec): min=6, max=143, avg=60.49, stdev=13.98 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 37], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:11:41.786 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:11:41.786 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 63], 95.00th=[ 66], 00:11:41.786 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 140], 00:11:41.786 | 99.99th=[ 142] 00:11:41.786 bw ( KiB/s): min=144384, max=289280, per=15.89%, avg=269335.50, stdev=36670.09, samples=20 00:11:41.786 iops : 
min= 564, max= 1130, avg=1052.05, stdev=143.23, samples=20 00:11:41.786 lat (msec) : 10=0.03%, 20=0.55%, 50=3.75%, 100=91.89%, 250=3.78% 00:11:41.786 cpu : usr=0.43%, sys=3.57%, ctx=2091, majf=0, minf=4097 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=10585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job5: (groupid=0, jobs=1): err= 0: pid=66895: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=453, BW=113MiB/s (119MB/s)(1145MiB/10106msec) 00:11:41.786 slat (usec): min=20, max=58156, avg=2180.12, stdev=4903.59 00:11:41.786 clat (msec): min=54, max=238, avg=138.89, stdev=12.29 00:11:41.786 lat (msec): min=54, max=238, avg=141.07, stdev=12.54 00:11:41.786 clat percentiles (msec): 00:11:41.786 | 1.00th=[ 108], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 133], 00:11:41.786 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 140], 00:11:41.786 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 155], 00:11:41.786 | 99.00th=[ 171], 99.50th=[ 182], 99.90th=[ 239], 99.95th=[ 239], 00:11:41.786 | 99.99th=[ 239] 00:11:41.786 bw ( KiB/s): min=102605, max=123145, per=6.82%, avg=115654.35, stdev=4118.73, samples=20 00:11:41.786 iops : min= 400, max= 481, avg=451.50, stdev=16.20, samples=20 00:11:41.786 lat (msec) : 100=0.68%, 250=99.32% 00:11:41.786 cpu : usr=0.19%, sys=1.61%, ctx=1082, majf=0, minf=4097 00:11:41.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:41.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.786 issued rwts: total=4580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.786 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.786 job6: (groupid=0, jobs=1): err= 0: pid=66896: Mon Nov 18 18:09:58 2024 00:11:41.786 read: IOPS=446, BW=112MiB/s (117MB/s)(1130MiB/10110msec) 00:11:41.787 slat (usec): min=19, max=59522, avg=2216.07, stdev=5117.03 00:11:41.787 clat (msec): min=17, max=240, avg=140.85, stdev=11.62 00:11:41.787 lat (msec): min=18, max=240, avg=143.06, stdev=11.72 00:11:41.787 clat percentiles (msec): 00:11:41.787 | 1.00th=[ 116], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 134], 00:11:41.787 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142], 00:11:41.787 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 159], 00:11:41.787 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 213], 99.95th=[ 236], 00:11:41.787 | 99.99th=[ 241] 00:11:41.787 bw ( KiB/s): min=101579, max=120590, per=6.73%, avg=114055.70, stdev=4651.46, samples=20 00:11:41.787 iops : min= 396, max= 471, avg=445.30, stdev=18.22, samples=20 00:11:41.787 lat (msec) : 20=0.02%, 50=0.09%, 100=0.15%, 250=99.73% 00:11:41.787 cpu : usr=0.16%, sys=1.95%, ctx=1023, majf=0, minf=4097 00:11:41.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:41.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.787 issued rwts: total=4518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.787 job7: (groupid=0, jobs=1): err= 
0: pid=66897: Mon Nov 18 18:09:58 2024 00:11:41.787 read: IOPS=549, BW=137MiB/s (144MB/s)(1385MiB/10080msec) 00:11:41.787 slat (usec): min=21, max=58501, avg=1801.04, stdev=4281.68 00:11:41.787 clat (msec): min=41, max=192, avg=114.50, stdev=14.40 00:11:41.787 lat (msec): min=41, max=192, avg=116.30, stdev=14.50 00:11:41.787 clat percentiles (msec): 00:11:41.787 | 1.00th=[ 72], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 107], 00:11:41.787 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:11:41.787 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 134], 00:11:41.787 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 192], 00:11:41.787 | 99.99th=[ 192] 00:11:41.787 bw ( KiB/s): min=114176, max=152064, per=8.27%, avg=140239.35, stdev=8362.46, samples=20 00:11:41.787 iops : min= 446, max= 594, avg=547.70, stdev=32.64, samples=20 00:11:41.787 lat (msec) : 50=0.79%, 100=5.41%, 250=93.79% 00:11:41.787 cpu : usr=0.31%, sys=2.19%, ctx=1142, majf=0, minf=4097 00:11:41.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:41.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.787 issued rwts: total=5541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.787 job8: (groupid=0, jobs=1): err= 0: pid=66898: Mon Nov 18 18:09:58 2024 00:11:41.787 read: IOPS=450, BW=113MiB/s (118MB/s)(1138MiB/10105msec) 00:11:41.787 slat (usec): min=19, max=81108, avg=2197.02, stdev=5047.36 00:11:41.787 clat (msec): min=54, max=234, avg=139.71, stdev=12.11 00:11:41.787 lat (msec): min=55, max=234, avg=141.91, stdev=12.35 00:11:41.787 clat percentiles (msec): 00:11:41.787 | 1.00th=[ 111], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 136], 00:11:41.787 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142], 00:11:41.787 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 155], 00:11:41.787 | 99.00th=[ 171], 99.50th=[ 188], 99.90th=[ 228], 99.95th=[ 234], 00:11:41.787 | 99.99th=[ 234] 00:11:41.787 bw ( KiB/s): min=100553, max=121344, per=6.78%, avg=114891.30, stdev=4575.57, samples=20 00:11:41.787 iops : min= 392, max= 474, avg=448.75, stdev=18.00, samples=20 00:11:41.787 lat (msec) : 100=0.68%, 250=99.32% 00:11:41.787 cpu : usr=0.24%, sys=1.97%, ctx=1014, majf=0, minf=4097 00:11:41.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:41.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.787 issued rwts: total=4552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.787 job9: (groupid=0, jobs=1): err= 0: pid=66899: Mon Nov 18 18:09:58 2024 00:11:41.787 read: IOPS=1155, BW=289MiB/s (303MB/s)(2892MiB/10013msec) 00:11:41.787 slat (usec): min=19, max=48203, avg=859.46, stdev=2103.58 00:11:41.787 clat (msec): min=11, max=111, avg=54.48, stdev= 9.76 00:11:41.787 lat (msec): min=15, max=111, avg=55.33, stdev= 9.82 00:11:41.787 clat percentiles (msec): 00:11:41.787 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 53], 00:11:41.787 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:11:41.787 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:11:41.787 | 99.00th=[ 67], 99.50th=[ 85], 99.90th=[ 105], 99.95th=[ 109], 00:11:41.787 | 
99.99th=[ 112] 00:11:41.787 bw ( KiB/s): min=269824, max=437248, per=17.38%, avg=294560.00, stdev=43945.15, samples=20 00:11:41.787 iops : min= 1054, max= 1708, avg=1150.50, stdev=171.70, samples=20 00:11:41.787 lat (msec) : 20=0.05%, 50=14.71%, 100=84.88%, 250=0.35% 00:11:41.787 cpu : usr=0.64%, sys=4.68%, ctx=2246, majf=0, minf=4097 00:11:41.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:41.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.787 issued rwts: total=11567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.787 job10: (groupid=0, jobs=1): err= 0: pid=66900: Mon Nov 18 18:09:58 2024 00:11:41.787 read: IOPS=547, BW=137MiB/s (144MB/s)(1380MiB/10078msec) 00:11:41.787 slat (usec): min=17, max=75190, avg=1805.24, stdev=4293.88 00:11:41.787 clat (msec): min=56, max=189, avg=114.99, stdev=13.32 00:11:41.787 lat (msec): min=56, max=189, avg=116.79, stdev=13.32 00:11:41.787 clat percentiles (msec): 00:11:41.787 | 1.00th=[ 68], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 107], 00:11:41.787 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 117], 00:11:41.787 | 70.00th=[ 120], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 138], 00:11:41.787 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 174], 00:11:41.787 | 99.99th=[ 190] 00:11:41.787 bw ( KiB/s): min=114404, max=146432, per=8.24%, avg=139678.55, stdev=7600.06, samples=20 00:11:41.787 iops : min= 446, max= 572, avg=545.50, stdev=29.84, samples=20 00:11:41.787 lat (msec) : 100=7.77%, 250=92.23% 00:11:41.787 cpu : usr=0.38%, sys=2.32%, ctx=1135, majf=0, minf=4097 00:11:41.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:41.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:41.787 issued rwts: total=5518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.787 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:41.787 00:11:41.787 Run status group 0 (all jobs): 00:11:41.787 READ: bw=1655MiB/s (1736MB/s), 112MiB/s-289MiB/s (117MB/s-303MB/s), io=16.3GiB (17.6GB), run=10013-10111msec 00:11:41.787 00:11:41.787 Disk stats (read/write): 00:11:41.787 nvme0n1: ios=8985/0, merge=0/0, ticks=1230462/0, in_queue=1230462, util=97.76% 00:11:41.787 nvme10n1: ios=10780/0, merge=0/0, ticks=1235328/0, in_queue=1235328, util=97.89% 00:11:41.787 nvme1n1: ios=9029/0, merge=0/0, ticks=1230749/0, in_queue=1230749, util=98.13% 00:11:41.787 nvme2n1: ios=10892/0, merge=0/0, ticks=1234025/0, in_queue=1234025, util=98.26% 00:11:41.787 nvme3n1: ios=21042/0, merge=0/0, ticks=1241097/0, in_queue=1241097, util=98.20% 00:11:41.787 nvme4n1: ios=9032/0, merge=0/0, ticks=1230242/0, in_queue=1230242, util=98.45% 00:11:41.787 nvme5n1: ios=8912/0, merge=0/0, ticks=1232020/0, in_queue=1232020, util=98.54% 00:11:41.787 nvme6n1: ios=10968/0, merge=0/0, ticks=1235488/0, in_queue=1235488, util=98.70% 00:11:41.787 nvme7n1: ios=8976/0, merge=0/0, ticks=1230960/0, in_queue=1230960, util=98.87% 00:11:41.787 nvme8n1: ios=23040/0, merge=0/0, ticks=1240512/0, in_queue=1240512, util=99.09% 00:11:41.787 nvme9n1: ios=10911/0, merge=0/0, ticks=1235308/0, in_queue=1235308, util=99.17% 00:11:41.787 18:09:58 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 
-d 64 -t randwrite -r 10 00:11:41.787 [global] 00:11:41.787 thread=1 00:11:41.787 invalidate=1 00:11:41.787 rw=randwrite 00:11:41.787 time_based=1 00:11:41.787 runtime=10 00:11:41.787 ioengine=libaio 00:11:41.787 direct=1 00:11:41.787 bs=262144 00:11:41.787 iodepth=64 00:11:41.787 norandommap=1 00:11:41.787 numjobs=1 00:11:41.787 00:11:41.787 [job0] 00:11:41.787 filename=/dev/nvme0n1 00:11:41.787 [job1] 00:11:41.787 filename=/dev/nvme10n1 00:11:41.787 [job2] 00:11:41.787 filename=/dev/nvme1n1 00:11:41.787 [job3] 00:11:41.787 filename=/dev/nvme2n1 00:11:41.787 [job4] 00:11:41.787 filename=/dev/nvme3n1 00:11:41.787 [job5] 00:11:41.787 filename=/dev/nvme4n1 00:11:41.787 [job6] 00:11:41.787 filename=/dev/nvme5n1 00:11:41.787 [job7] 00:11:41.787 filename=/dev/nvme6n1 00:11:41.787 [job8] 00:11:41.787 filename=/dev/nvme7n1 00:11:41.787 [job9] 00:11:41.787 filename=/dev/nvme8n1 00:11:41.787 [job10] 00:11:41.787 filename=/dev/nvme9n1 00:11:41.787 Could not set queue depth (nvme0n1) 00:11:41.787 Could not set queue depth (nvme10n1) 00:11:41.787 Could not set queue depth (nvme1n1) 00:11:41.787 Could not set queue depth (nvme2n1) 00:11:41.787 Could not set queue depth (nvme3n1) 00:11:41.787 Could not set queue depth (nvme4n1) 00:11:41.787 Could not set queue depth (nvme5n1) 00:11:41.787 Could not set queue depth (nvme6n1) 00:11:41.787 Could not set queue depth (nvme7n1) 00:11:41.787 Could not set queue depth (nvme8n1) 00:11:41.787 Could not set queue depth (nvme9n1) 00:11:41.787 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.787 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.788 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.788 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.788 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.788 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:11:41.788 fio-3.35 00:11:41.788 Starting 11 threads 00:11:51.783 00:11:51.783 job0: (groupid=0, jobs=1): err= 0: pid=67107: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=646, BW=162MiB/s (170MB/s)(1629MiB/10078msec); 0 zone resets 00:11:51.783 slat (usec): min=16, max=62187, avg=1528.73, stdev=2733.86 00:11:51.783 clat (msec): min=68, max=206, avg=97.40, stdev=17.89 00:11:51.783 lat (msec): min=68, max=206, avg=98.92, stdev=17.97 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 85], 20.00th=[ 87], 00:11:51.783 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:11:51.783 | 70.00th=[ 92], 80.00th=[ 
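This second pass is the same wrapper run with only the workload type changed; everything else in the echoed job file ([global] settings, one job per namespace) matches the read pass above:

# First pass (above): sequential reads. Second pass: random writes. Only -t differs.
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10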
115], 90.00th=[ 124], 95.00th=[ 127], 00:11:51.783 | 99.00th=[ 165], 99.50th=[ 167], 99.90th=[ 199], 99.95th=[ 199], 00:11:51.783 | 99.99th=[ 207] 00:11:51.783 bw ( KiB/s): min=90112, max=183808, per=13.72%, avg=165222.40, stdev=27595.10, samples=20 00:11:51.783 iops : min= 352, max= 718, avg=645.40, stdev=107.79, samples=20 00:11:51.783 lat (msec) : 100=77.69%, 250=22.31% 00:11:51.783 cpu : usr=1.17%, sys=1.77%, ctx=7300, majf=0, minf=1 00:11:51.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:51.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.783 issued rwts: total=0,6517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.783 job1: (groupid=0, jobs=1): err= 0: pid=67113: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=316, BW=79.2MiB/s (83.1MB/s)(810MiB/10227msec); 0 zone resets 00:11:51.783 slat (usec): min=16, max=84572, avg=3081.04, stdev=5606.41 00:11:51.783 clat (msec): min=86, max=505, avg=198.77, stdev=40.27 00:11:51.783 lat (msec): min=86, max=505, avg=201.85, stdev=40.41 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 136], 5.00th=[ 148], 10.00th=[ 155], 20.00th=[ 159], 00:11:51.783 | 30.00th=[ 167], 40.00th=[ 203], 50.00th=[ 211], 60.00th=[ 215], 00:11:51.783 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 230], 00:11:51.783 | 99.00th=[ 363], 99.50th=[ 435], 99.90th=[ 489], 99.95th=[ 506], 00:11:51.783 | 99.99th=[ 506] 00:11:51.783 bw ( KiB/s): min=57856, max=106496, per=6.75%, avg=81356.80, stdev=12829.50, samples=20 00:11:51.783 iops : min= 226, max= 416, avg=317.80, stdev=50.12, samples=20 00:11:51.783 lat (msec) : 100=0.25%, 250=95.96%, 500=3.73%, 750=0.06% 00:11:51.783 cpu : usr=0.57%, sys=0.96%, ctx=3368, majf=0, minf=1 00:11:51.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:51.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.783 issued rwts: total=0,3241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.783 job2: (groupid=0, jobs=1): err= 0: pid=67119: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=524, BW=131MiB/s (137MB/s)(1318MiB/10052msec); 0 zone resets 00:11:51.783 slat (usec): min=17, max=43617, avg=1891.21, stdev=3348.98 00:11:51.783 clat (msec): min=45, max=158, avg=120.13, stdev=24.51 00:11:51.783 lat (msec): min=45, max=158, avg=122.02, stdev=24.69 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 54], 5.00th=[ 57], 10.00th=[ 87], 20.00th=[ 115], 00:11:51.783 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 124], 00:11:51.783 | 70.00th=[ 125], 80.00th=[ 127], 90.00th=[ 155], 95.00th=[ 157], 00:11:51.783 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:11:51.783 | 99.99th=[ 159] 00:11:51.783 bw ( KiB/s): min=104448, max=266752, per=11.07%, avg=133324.80, stdev=33537.24, samples=20 00:11:51.783 iops : min= 408, max= 1042, avg=520.80, stdev=131.00, samples=20 00:11:51.783 lat (msec) : 50=0.08%, 100=10.64%, 250=89.28% 00:11:51.783 cpu : usr=0.87%, sys=1.64%, ctx=6050, majf=0, minf=1 00:11:51.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:51.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:51.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.783 issued rwts: total=0,5271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.783 job3: (groupid=0, jobs=1): err= 0: pid=67120: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=479, BW=120MiB/s (126MB/s)(1227MiB/10243msec); 0 zone resets 00:11:51.783 slat (usec): min=16, max=23622, avg=2014.59, stdev=3564.24 00:11:51.783 clat (msec): min=8, max=513, avg=131.48, stdev=36.81 00:11:51.783 lat (msec): min=8, max=513, avg=133.49, stdev=37.00 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 80], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 118], 00:11:51.783 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 123], 60.00th=[ 124], 00:11:51.783 | 70.00th=[ 125], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 157], 00:11:51.783 | 99.00th=[ 305], 99.50th=[ 409], 99.90th=[ 498], 99.95th=[ 498], 00:11:51.783 | 99.99th=[ 514] 00:11:51.783 bw ( KiB/s): min=68608, max=136192, per=10.30%, avg=124032.00, stdev=17606.50, samples=20 00:11:51.783 iops : min= 268, max= 532, avg=484.50, stdev=68.78, samples=20 00:11:51.783 lat (msec) : 10=0.08%, 20=0.16%, 50=0.41%, 100=0.57%, 250=96.68% 00:11:51.783 lat (msec) : 500=2.06%, 750=0.04% 00:11:51.783 cpu : usr=0.87%, sys=1.15%, ctx=5317, majf=0, minf=1 00:11:51.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:51.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.783 issued rwts: total=0,4908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.783 job4: (groupid=0, jobs=1): err= 0: pid=67121: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=323, BW=80.8MiB/s (84.7MB/s)(827MiB/10229msec); 0 zone resets 00:11:51.783 slat (usec): min=20, max=42255, avg=3024.10, stdev=5371.17 00:11:51.783 clat (msec): min=45, max=494, avg=194.90, stdev=43.07 00:11:51.783 lat (msec): min=45, max=494, avg=197.92, stdev=43.33 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 109], 5.00th=[ 144], 10.00th=[ 146], 20.00th=[ 153], 00:11:51.783 | 30.00th=[ 155], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 215], 00:11:51.783 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 224], 95.00th=[ 228], 00:11:51.783 | 99.00th=[ 355], 99.50th=[ 426], 99.90th=[ 477], 99.95th=[ 493], 00:11:51.783 | 99.99th=[ 493] 00:11:51.783 bw ( KiB/s): min=59392, max=108544, per=6.89%, avg=82995.20, stdev=14796.25, samples=20 00:11:51.783 iops : min= 232, max= 424, avg=324.20, stdev=57.80, samples=20 00:11:51.783 lat (msec) : 50=0.06%, 100=0.73%, 250=95.49%, 500=3.72% 00:11:51.783 cpu : usr=0.62%, sys=0.90%, ctx=4078, majf=0, minf=1 00:11:51.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:51.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.783 issued rwts: total=0,3306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.783 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.783 job5: (groupid=0, jobs=1): err= 0: pid=67122: Mon Nov 18 18:10:09 2024 00:11:51.783 write: IOPS=649, BW=162MiB/s (170MB/s)(1637MiB/10084msec); 0 zone resets 00:11:51.783 slat (usec): min=17, max=16972, avg=1522.78, stdev=2638.98 00:11:51.783 clat (msec): min=18, max=168, avg=97.02, stdev=17.74 00:11:51.783 
lat (msec): min=18, max=168, avg=98.54, stdev=17.82 00:11:51.783 clat percentiles (msec): 00:11:51.783 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 85], 20.00th=[ 87], 00:11:51.783 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 91], 00:11:51.783 | 70.00th=[ 92], 80.00th=[ 115], 90.00th=[ 125], 95.00th=[ 127], 00:11:51.783 | 99.00th=[ 159], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 165], 00:11:51.783 | 99.99th=[ 169] 00:11:51.783 bw ( KiB/s): min=106709, max=183296, per=13.78%, avg=166001.05, stdev=25498.66, samples=20 00:11:51.784 iops : min= 416, max= 716, avg=648.40, stdev=99.71, samples=20 00:11:51.784 lat (msec) : 20=0.06%, 50=0.31%, 100=77.30%, 250=22.33% 00:11:51.784 cpu : usr=0.95%, sys=1.74%, ctx=9463, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,6547,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 job6: (groupid=0, jobs=1): err= 0: pid=67123: Mon Nov 18 18:10:09 2024 00:11:51.784 write: IOPS=317, BW=79.3MiB/s (83.1MB/s)(812MiB/10241msec); 0 zone resets 00:11:51.784 slat (usec): min=18, max=69146, avg=3075.56, stdev=5566.43 00:11:51.784 clat (msec): min=8, max=514, avg=198.62, stdev=45.29 00:11:51.784 lat (msec): min=8, max=514, avg=201.69, stdev=45.55 00:11:51.784 clat percentiles (msec): 00:11:51.784 | 1.00th=[ 56], 5.00th=[ 148], 10.00th=[ 155], 20.00th=[ 159], 00:11:51.784 | 30.00th=[ 165], 40.00th=[ 203], 50.00th=[ 213], 60.00th=[ 218], 00:11:51.784 | 70.00th=[ 220], 80.00th=[ 222], 90.00th=[ 228], 95.00th=[ 232], 00:11:51.784 | 99.00th=[ 376], 99.50th=[ 443], 99.90th=[ 498], 99.95th=[ 514], 00:11:51.784 | 99.99th=[ 514] 00:11:51.784 bw ( KiB/s): min=59904, max=102400, per=6.77%, avg=81545.85, stdev=13400.37, samples=20 00:11:51.784 iops : min= 234, max= 400, avg=318.50, stdev=52.29, samples=20 00:11:51.784 lat (msec) : 10=0.12%, 20=0.25%, 50=0.49%, 100=0.99%, 250=94.49% 00:11:51.784 lat (msec) : 500=3.60%, 750=0.06% 00:11:51.784 cpu : usr=0.51%, sys=1.06%, ctx=2200, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,3248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 job7: (groupid=0, jobs=1): err= 0: pid=67124: Mon Nov 18 18:10:09 2024 00:11:51.784 write: IOPS=479, BW=120MiB/s (126MB/s)(1227MiB/10241msec); 0 zone resets 00:11:51.784 slat (usec): min=18, max=21959, avg=2015.68, stdev=3565.69 00:11:51.784 clat (msec): min=6, max=508, avg=131.46, stdev=36.54 00:11:51.784 lat (msec): min=6, max=508, avg=133.48, stdev=36.73 00:11:51.784 clat percentiles (msec): 00:11:51.784 | 1.00th=[ 90], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 117], 00:11:51.784 | 30.00th=[ 122], 40.00th=[ 123], 50.00th=[ 123], 60.00th=[ 124], 00:11:51.784 | 70.00th=[ 125], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 157], 00:11:51.784 | 99.00th=[ 296], 99.50th=[ 405], 99.90th=[ 493], 99.95th=[ 493], 00:11:51.784 | 99.99th=[ 510] 00:11:51.784 bw ( KiB/s): min=67072, max=135168, per=10.29%, avg=123955.20, stdev=17853.20, samples=20 00:11:51.784 iops : min= 
262, max= 528, avg=484.20, stdev=69.74, samples=20 00:11:51.784 lat (msec) : 10=0.02%, 20=0.16%, 50=0.41%, 100=0.59%, 250=96.66% 00:11:51.784 lat (msec) : 500=2.12%, 750=0.04% 00:11:51.784 cpu : usr=0.86%, sys=1.12%, ctx=5033, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,4906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 job8: (groupid=0, jobs=1): err= 0: pid=67125: Mon Nov 18 18:10:09 2024 00:11:51.784 write: IOPS=347, BW=86.9MiB/s (91.1MB/s)(890MiB/10241msec); 0 zone resets 00:11:51.784 slat (usec): min=18, max=38045, avg=2777.77, stdev=5059.63 00:11:51.784 clat (msec): min=39, max=507, avg=181.24, stdev=53.45 00:11:51.784 lat (msec): min=39, max=508, avg=184.02, stdev=53.98 00:11:51.784 clat percentiles (msec): 00:11:51.784 | 1.00th=[ 97], 5.00th=[ 115], 10.00th=[ 121], 20.00th=[ 123], 00:11:51.784 | 30.00th=[ 126], 40.00th=[ 163], 50.00th=[ 205], 60.00th=[ 213], 00:11:51.784 | 70.00th=[ 215], 80.00th=[ 218], 90.00th=[ 220], 95.00th=[ 222], 00:11:51.784 | 99.00th=[ 351], 99.50th=[ 439], 99.90th=[ 493], 99.95th=[ 510], 00:11:51.784 | 99.99th=[ 510] 00:11:51.784 bw ( KiB/s): min=59392, max=135168, per=7.43%, avg=89497.60, stdev=24522.95, samples=20 00:11:51.784 iops : min= 232, max= 528, avg=349.60, stdev=95.79, samples=20 00:11:51.784 lat (msec) : 50=0.22%, 100=0.81%, 250=95.53%, 500=3.37%, 750=0.06% 00:11:51.784 cpu : usr=0.61%, sys=1.05%, ctx=4345, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,3560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 job9: (groupid=0, jobs=1): err= 0: pid=67126: Mon Nov 18 18:10:09 2024 00:11:51.784 write: IOPS=324, BW=81.2MiB/s (85.1MB/s)(831MiB/10236msec); 0 zone resets 00:11:51.784 slat (usec): min=16, max=30808, avg=3005.00, stdev=5338.03 00:11:51.784 clat (msec): min=14, max=509, avg=193.97, stdev=46.69 00:11:51.784 lat (msec): min=14, max=509, avg=196.97, stdev=47.02 00:11:51.784 clat percentiles (msec): 00:11:51.784 | 1.00th=[ 60], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 153], 00:11:51.784 | 30.00th=[ 155], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 215], 00:11:51.784 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 226], 95.00th=[ 232], 00:11:51.784 | 99.00th=[ 368], 99.50th=[ 439], 99.90th=[ 493], 99.95th=[ 510], 00:11:51.784 | 99.99th=[ 510] 00:11:51.784 bw ( KiB/s): min=59904, max=112640, per=6.93%, avg=83481.60, stdev=15909.31, samples=20 00:11:51.784 iops : min= 234, max= 440, avg=326.10, stdev=62.15, samples=20 00:11:51.784 lat (msec) : 20=0.12%, 50=0.72%, 100=0.84%, 250=94.65%, 500=3.61% 00:11:51.784 lat (msec) : 750=0.06% 00:11:51.784 cpu : usr=0.66%, sys=0.96%, ctx=3091, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,3324,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 job10: (groupid=0, jobs=1): err= 0: pid=67127: Mon Nov 18 18:10:09 2024 00:11:51.784 write: IOPS=329, BW=82.3MiB/s (86.3MB/s)(843MiB/10240msec); 0 zone resets 00:11:51.784 slat (usec): min=16, max=24742, avg=2883.75, stdev=5279.84 00:11:51.784 clat (msec): min=13, max=511, avg=191.47, stdev=49.97 00:11:51.784 lat (msec): min=13, max=511, avg=194.36, stdev=50.49 00:11:51.784 clat percentiles (msec): 00:11:51.784 | 1.00th=[ 46], 5.00th=[ 129], 10.00th=[ 146], 20.00th=[ 153], 00:11:51.784 | 30.00th=[ 157], 40.00th=[ 201], 50.00th=[ 211], 60.00th=[ 215], 00:11:51.784 | 70.00th=[ 218], 80.00th=[ 222], 90.00th=[ 226], 95.00th=[ 232], 00:11:51.784 | 99.00th=[ 372], 99.50th=[ 443], 99.90th=[ 493], 99.95th=[ 514], 00:11:51.784 | 99.99th=[ 514] 00:11:51.784 bw ( KiB/s): min=59904, max=128512, per=7.03%, avg=84659.20, stdev=18447.92, samples=20 00:11:51.784 iops : min= 234, max= 502, avg=330.70, stdev=72.06, samples=20 00:11:51.784 lat (msec) : 20=0.03%, 50=1.16%, 100=1.93%, 250=93.26%, 500=3.56% 00:11:51.784 lat (msec) : 750=0.06% 00:11:51.784 cpu : usr=0.65%, sys=1.05%, ctx=4014, majf=0, minf=1 00:11:51.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:11:51.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:51.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:11:51.784 issued rwts: total=0,3370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:51.784 latency : target=0, window=0, percentile=100.00%, depth=64 00:11:51.784 00:11:51.784 Run status group 0 (all jobs): 00:11:51.784 WRITE: bw=1176MiB/s (1234MB/s), 79.2MiB/s-162MiB/s (83.1MB/s-170MB/s), io=11.8GiB (12.6GB), run=10052-10243msec 00:11:51.784 00:11:51.784 Disk stats (read/write): 00:11:51.784 nvme0n1: ios=49/12852, merge=0/0, ticks=28/1212231, in_queue=1212259, util=97.51% 00:11:51.784 nvme10n1: ios=47/6464, merge=0/0, ticks=46/1234327, in_queue=1234373, util=97.86% 00:11:51.784 nvme1n1: ios=23/10331, merge=0/0, ticks=22/1214880, in_queue=1214902, util=97.80% 00:11:51.784 nvme2n1: ios=20/9807, merge=0/0, ticks=28/1237218, in_queue=1237246, util=98.13% 00:11:51.784 nvme3n1: ios=0/6589, merge=0/0, ticks=0/1234123, in_queue=1234123, util=97.88% 00:11:51.784 nvme4n1: ios=0/12925, merge=0/0, ticks=0/1214116, in_queue=1214116, util=98.19% 00:11:51.784 nvme5n1: ios=0/6487, merge=0/0, ticks=0/1236591, in_queue=1236591, util=98.43% 00:11:51.784 nvme6n1: ios=0/9796, merge=0/0, ticks=0/1236114, in_queue=1236114, util=98.43% 00:11:51.784 nvme7n1: ios=0/7106, merge=0/0, ticks=0/1236818, in_queue=1236818, util=98.71% 00:11:51.784 nvme8n1: ios=0/6634, merge=0/0, ticks=0/1235723, in_queue=1235723, util=98.77% 00:11:51.784 nvme9n1: ios=0/6729, merge=0/0, ticks=0/1237933, in_queue=1237933, util=98.93% 00:11:51.784 18:10:09 -- target/multiconnection.sh@36 -- # sync 00:11:51.784 18:10:09 -- target/multiconnection.sh@37 -- # seq 1 11 00:11:51.784 18:10:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.784 18:10:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.784 18:10:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:11:51.784 18:10:09 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.784 18:10:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.784 18:10:09 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:11:51.784 18:10:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.784 18:10:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:11:51.785 18:10:09 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.785 18:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:11:51.785 18:10:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:11:51.785 18:10:09 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:11:51.785 18:10:09 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:51.785 18:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:11:51.785 18:10:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:11:51.785 18:10:09 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:11:51.785 18:10:09 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:51.785 18:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:11:51.785 18:10:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:11:51.785 18:10:09 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:09 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:11:51.785 18:10:09 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:51.785 18:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:11:51.785 18:10:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:11:51.785 18:10:09 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:11:51.785 18:10:09 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:11:51.785 18:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:09 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:11:51.785 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:11:51.785 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:11:51.785 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:11:51.785 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:11:51.785 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:11:51.785 18:10:10 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:11:51.785 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:11:51.785 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:11:51.785 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:11:51.785 18:10:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:11:51.785 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:11:51.785 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:11:51.785 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:11:51.785 18:10:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:51.785 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:11:51.785 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.785 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.785 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.785 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.785 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:11:51.785 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:11:51.785 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:11:51.785 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:51.785 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:11:52.088 18:10:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:52.088 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:11:52.088 18:10:10 -- 
common/autotest_common.sh@1220 -- # return 0 00:11:52.088 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:11:52.088 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.088 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:52.088 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.088 18:10:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:52.088 18:10:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:11:52.088 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:11:52.088 18:10:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:11:52.088 18:10:10 -- common/autotest_common.sh@1208 -- # local i=0 00:11:52.088 18:10:10 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:52.088 18:10:10 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:11:52.088 18:10:10 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:52.088 18:10:10 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:11:52.088 18:10:10 -- common/autotest_common.sh@1220 -- # return 0 00:11:52.088 18:10:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:11:52.088 18:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.088 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:52.088 18:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.088 18:10:10 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:11:52.088 18:10:10 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:52.088 18:10:10 -- target/multiconnection.sh@47 -- # nvmftestfini 00:11:52.089 18:10:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:52.089 18:10:10 -- nvmf/common.sh@116 -- # sync 00:11:52.089 18:10:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:52.089 18:10:10 -- nvmf/common.sh@119 -- # set +e 00:11:52.089 18:10:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:52.089 18:10:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:52.089 rmmod nvme_tcp 00:11:52.089 rmmod nvme_fabrics 00:11:52.089 rmmod nvme_keyring 00:11:52.089 18:10:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:52.089 18:10:10 -- nvmf/common.sh@123 -- # set -e 00:11:52.089 18:10:10 -- nvmf/common.sh@124 -- # return 0 00:11:52.089 18:10:10 -- nvmf/common.sh@477 -- # '[' -n 66430 ']' 00:11:52.089 18:10:10 -- nvmf/common.sh@478 -- # killprocess 66430 00:11:52.089 18:10:10 -- common/autotest_common.sh@936 -- # '[' -z 66430 ']' 00:11:52.089 18:10:10 -- common/autotest_common.sh@940 -- # kill -0 66430 00:11:52.089 18:10:10 -- common/autotest_common.sh@941 -- # uname 00:11:52.089 18:10:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.089 18:10:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66430 00:11:52.089 killing process with pid 66430 00:11:52.089 18:10:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.089 18:10:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.089 18:10:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66430' 00:11:52.089 18:10:10 -- common/autotest_common.sh@955 -- # kill 66430 00:11:52.089 18:10:10 -- common/autotest_common.sh@960 -- # wait 66430 00:11:52.348 18:10:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:52.348 18:10:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p 
]] 00:11:52.348 18:10:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:52.348 18:10:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.348 18:10:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:52.348 18:10:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.348 18:10:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.348 18:10:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.348 18:10:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:52.348 ************************************ 00:11:52.348 END TEST nvmf_multiconnection 00:11:52.348 ************************************ 00:11:52.348 00:11:52.348 real 0m49.044s 00:11:52.348 user 2m41.723s 00:11:52.348 sys 0m33.848s 00:11:52.348 18:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.348 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:52.608 18:10:10 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:52.608 18:10:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.608 18:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.608 18:10:10 -- common/autotest_common.sh@10 -- # set +x 00:11:52.608 ************************************ 00:11:52.608 START TEST nvmf_initiator_timeout 00:11:52.608 ************************************ 00:11:52.608 18:10:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:11:52.608 * Looking for test storage... 00:11:52.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.608 18:10:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:52.608 18:10:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:52.608 18:10:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:52.608 18:10:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:52.608 18:10:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:52.608 18:10:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:52.608 18:10:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:52.608 18:10:11 -- scripts/common.sh@335 -- # IFS=.-: 00:11:52.608 18:10:11 -- scripts/common.sh@335 -- # read -ra ver1 00:11:52.608 18:10:11 -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.608 18:10:11 -- scripts/common.sh@336 -- # read -ra ver2 00:11:52.608 18:10:11 -- scripts/common.sh@337 -- # local 'op=<' 00:11:52.608 18:10:11 -- scripts/common.sh@339 -- # ver1_l=2 00:11:52.608 18:10:11 -- scripts/common.sh@340 -- # ver2_l=1 00:11:52.608 18:10:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:52.608 18:10:11 -- scripts/common.sh@343 -- # case "$op" in 00:11:52.608 18:10:11 -- scripts/common.sh@344 -- # : 1 00:11:52.608 18:10:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:52.608 18:10:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.608 18:10:11 -- scripts/common.sh@364 -- # decimal 1 00:11:52.608 18:10:11 -- scripts/common.sh@352 -- # local d=1 00:11:52.608 18:10:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.608 18:10:11 -- scripts/common.sh@354 -- # echo 1 00:11:52.608 18:10:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:52.608 18:10:11 -- scripts/common.sh@365 -- # decimal 2 00:11:52.608 18:10:11 -- scripts/common.sh@352 -- # local d=2 00:11:52.608 18:10:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.608 18:10:11 -- scripts/common.sh@354 -- # echo 2 00:11:52.608 18:10:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:52.608 18:10:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:52.608 18:10:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:52.608 18:10:11 -- scripts/common.sh@367 -- # return 0 00:11:52.608 18:10:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.608 18:10:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:52.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.608 --rc genhtml_branch_coverage=1 00:11:52.608 --rc genhtml_function_coverage=1 00:11:52.608 --rc genhtml_legend=1 00:11:52.609 --rc geninfo_all_blocks=1 00:11:52.609 --rc geninfo_unexecuted_blocks=1 00:11:52.609 00:11:52.609 ' 00:11:52.609 18:10:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:52.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.609 --rc genhtml_branch_coverage=1 00:11:52.609 --rc genhtml_function_coverage=1 00:11:52.609 --rc genhtml_legend=1 00:11:52.609 --rc geninfo_all_blocks=1 00:11:52.609 --rc geninfo_unexecuted_blocks=1 00:11:52.609 00:11:52.609 ' 00:11:52.609 18:10:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:52.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.609 --rc genhtml_branch_coverage=1 00:11:52.609 --rc genhtml_function_coverage=1 00:11:52.609 --rc genhtml_legend=1 00:11:52.609 --rc geninfo_all_blocks=1 00:11:52.609 --rc geninfo_unexecuted_blocks=1 00:11:52.609 00:11:52.609 ' 00:11:52.609 18:10:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:52.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.609 --rc genhtml_branch_coverage=1 00:11:52.609 --rc genhtml_function_coverage=1 00:11:52.609 --rc genhtml_legend=1 00:11:52.609 --rc geninfo_all_blocks=1 00:11:52.609 --rc geninfo_unexecuted_blocks=1 00:11:52.609 00:11:52.609 ' 00:11:52.609 18:10:11 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.609 18:10:11 -- nvmf/common.sh@7 -- # uname -s 00:11:52.609 18:10:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.609 18:10:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.609 18:10:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.609 18:10:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.609 18:10:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.609 18:10:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.609 18:10:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.609 18:10:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.609 18:10:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.609 18:10:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 
00:11:52.609 18:10:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:11:52.609 18:10:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.609 18:10:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.609 18:10:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.609 18:10:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.609 18:10:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.609 18:10:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.609 18:10:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.609 18:10:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.609 18:10:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.609 18:10:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.609 18:10:11 -- paths/export.sh@5 -- # export PATH 00:11:52.609 18:10:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.609 18:10:11 -- nvmf/common.sh@46 -- # : 0 00:11:52.609 18:10:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.609 18:10:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.609 18:10:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.609 18:10:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.609 18:10:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.609 18:10:11 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:52.609 18:10:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.609 18:10:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:52.609 18:10:11 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.609 18:10:11 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.609 18:10:11 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:11:52.609 18:10:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:52.609 18:10:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.609 18:10:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:52.609 18:10:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:52.609 18:10:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:52.609 18:10:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.609 18:10:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.609 18:10:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.609 18:10:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:52.609 18:10:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:52.609 18:10:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.609 18:10:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.609 18:10:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.609 18:10:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:52.609 18:10:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.609 18:10:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.609 18:10:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.609 18:10:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.609 18:10:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.609 18:10:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.609 18:10:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.609 18:10:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.609 18:10:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:52.867 18:10:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:52.867 Cannot find device "nvmf_tgt_br" 00:11:52.867 18:10:11 -- nvmf/common.sh@154 -- # true 00:11:52.867 18:10:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.867 Cannot find device "nvmf_tgt_br2" 00:11:52.867 18:10:11 -- nvmf/common.sh@155 -- # true 00:11:52.867 18:10:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:52.867 18:10:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:52.867 Cannot find device "nvmf_tgt_br" 00:11:52.867 18:10:11 -- nvmf/common.sh@157 -- # true 00:11:52.867 18:10:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:52.867 Cannot find device "nvmf_tgt_br2" 00:11:52.867 18:10:11 -- nvmf/common.sh@158 -- # true 00:11:52.868 18:10:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:52.868 18:10:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:52.868 18:10:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:11:52.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.868 18:10:11 -- nvmf/common.sh@161 -- # true 00:11:52.868 18:10:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.868 18:10:11 -- nvmf/common.sh@162 -- # true 00:11:52.868 18:10:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.868 18:10:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.868 18:10:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.868 18:10:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.868 18:10:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.868 18:10:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.868 18:10:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.868 18:10:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.868 18:10:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.868 18:10:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:52.868 18:10:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:52.868 18:10:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:52.868 18:10:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:52.868 18:10:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.868 18:10:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.868 18:10:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.868 18:10:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:52.868 18:10:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:52.868 18:10:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.126 18:10:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.126 18:10:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.126 18:10:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.126 18:10:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.126 18:10:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:53.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:11:53.126 00:11:53.126 --- 10.0.0.2 ping statistics --- 00:11:53.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.126 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:53.126 18:10:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:53.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:53.126 00:11:53.126 --- 10.0.0.3 ping statistics --- 00:11:53.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.126 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:53.126 18:10:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:53.126 00:11:53.126 --- 10.0.0.1 ping statistics --- 00:11:53.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.126 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:53.126 18:10:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.126 18:10:11 -- nvmf/common.sh@421 -- # return 0 00:11:53.126 18:10:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:53.126 18:10:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.126 18:10:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:53.126 18:10:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:53.126 18:10:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.126 18:10:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:53.126 18:10:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:53.126 18:10:11 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:11:53.126 18:10:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:53.126 18:10:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.126 18:10:11 -- common/autotest_common.sh@10 -- # set +x 00:11:53.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.126 18:10:11 -- nvmf/common.sh@469 -- # nvmfpid=67498 00:11:53.126 18:10:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.126 18:10:11 -- nvmf/common.sh@470 -- # waitforlisten 67498 00:11:53.126 18:10:11 -- common/autotest_common.sh@829 -- # '[' -z 67498 ']' 00:11:53.126 18:10:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.126 18:10:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.126 18:10:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.126 18:10:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.126 18:10:11 -- common/autotest_common.sh@10 -- # set +x 00:11:53.126 [2024-11-18 18:10:11.603636] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:53.126 [2024-11-18 18:10:11.603949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.390 [2024-11-18 18:10:11.741524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.390 [2024-11-18 18:10:11.797414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:53.390 [2024-11-18 18:10:11.797798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.390 [2024-11-18 18:10:11.797864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.390 [2024-11-18 18:10:11.798125] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:53.390 [2024-11-18 18:10:11.798346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.390 [2024-11-18 18:10:11.798405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.390 [2024-11-18 18:10:11.798576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.390 [2024-11-18 18:10:11.798576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.327 18:10:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.327 18:10:12 -- common/autotest_common.sh@862 -- # return 0 00:11:54.327 18:10:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:54.327 18:10:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:54.327 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.327 18:10:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.327 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.327 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.327 Malloc0 00:11:54.327 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:11:54.327 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.327 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.327 Delay0 00:11:54.327 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.327 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.327 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.327 [2024-11-18 18:10:12.688450] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.327 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.327 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.327 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.327 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.327 18:10:12 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.327 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.328 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.328 18:10:12 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.328 18:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.328 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 [2024-11-18 18:10:12.716635] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.328 18:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.328 18:10:12 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.328 18:10:12 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.328 18:10:12 -- common/autotest_common.sh@1187 -- # local i=0 00:11:54.328 18:10:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.328 18:10:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:54.328 18:10:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:56.862 18:10:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:56.862 18:10:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:56.862 18:10:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.862 18:10:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:56.862 18:10:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.862 18:10:14 -- common/autotest_common.sh@1197 -- # return 0 00:11:56.862 18:10:14 -- target/initiator_timeout.sh@35 -- # fio_pid=67562 00:11:56.862 18:10:14 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:11:56.862 18:10:14 -- target/initiator_timeout.sh@37 -- # sleep 3 00:11:56.862 [global] 00:11:56.862 thread=1 00:11:56.862 invalidate=1 00:11:56.862 rw=write 00:11:56.862 time_based=1 00:11:56.862 runtime=60 00:11:56.862 ioengine=libaio 00:11:56.862 direct=1 00:11:56.862 bs=4096 00:11:56.862 iodepth=1 00:11:56.862 norandommap=0 00:11:56.862 numjobs=1 00:11:56.862 00:11:56.862 verify_dump=1 00:11:56.862 verify_backlog=512 00:11:56.862 verify_state_save=0 00:11:56.862 do_verify=1 00:11:56.862 verify=crc32c-intel 00:11:56.862 [job0] 00:11:56.862 filename=/dev/nvme0n1 00:11:56.862 Could not set queue depth (nvme0n1) 00:11:56.862 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:56.862 fio-3.35 00:11:56.862 Starting 1 thread 00:11:59.394 18:10:17 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:11:59.394 18:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.394 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:11:59.394 true 00:11:59.394 18:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.394 18:10:17 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:11:59.394 18:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.394 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:11:59.394 true 00:11:59.394 18:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.394 18:10:17 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:11:59.394 18:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.394 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:11:59.394 true 00:11:59.394 18:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.394 18:10:17 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:11:59.394 18:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.394 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:11:59.394 true 00:11:59.394 18:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.394 18:10:17 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:02.692 18:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.692 18:10:20 -- common/autotest_common.sh@10 -- # set +x 00:12:02.692 true 00:12:02.692 18:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:02.692 18:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.692 18:10:20 -- common/autotest_common.sh@10 -- # set +x 00:12:02.692 true 00:12:02.692 18:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:02.692 18:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.692 18:10:20 -- common/autotest_common.sh@10 -- # set +x 00:12:02.692 true 00:12:02.692 18:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:02.692 18:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.692 18:10:20 -- common/autotest_common.sh@10 -- # set +x 00:12:02.692 true 00:12:02.692 18:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:02.692 18:10:20 -- target/initiator_timeout.sh@54 -- # wait 67562 00:12:58.922 00:12:58.922 job0: (groupid=0, jobs=1): err= 0: pid=67589: Mon Nov 18 18:11:15 2024 00:12:58.922 read: IOPS=791, BW=3168KiB/s (3244kB/s)(186MiB/60000msec) 00:12:58.922 slat (usec): min=10, max=8861, avg=14.23, stdev=51.86 00:12:58.922 clat (usec): min=151, max=40717k, avg=1061.37, stdev=186789.86 00:12:58.922 lat (usec): min=162, max=40717k, avg=1075.60, stdev=186789.86 00:12:58.922 clat percentiles (usec): 00:12:58.922 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 186], 00:12:58.922 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:12:58.922 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:12:58.922 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 453], 00:12:58.922 | 99.99th=[ 914] 00:12:58.922 write: IOPS=793, BW=3174KiB/s (3251kB/s)(186MiB/60000msec); 0 zone resets 00:12:58.922 slat (usec): min=13, max=587, avg=21.53, stdev= 6.87 00:12:58.922 clat (usec): min=113, max=730, avg=161.82, stdev=21.77 00:12:58.922 lat (usec): min=134, max=749, avg=183.35, stdev=22.92 00:12:58.922 clat percentiles (usec): 00:12:58.922 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 145], 00:12:58.922 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:12:58.922 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 198], 00:12:58.922 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 255], 99.95th=[ 265], 00:12:58.922 | 99.99th=[ 635] 00:12:58.922 bw ( KiB/s): min= 4296, max=12128, per=100.00%, avg=9806.00, stdev=1568.11, samples=38 00:12:58.922 iops : min= 1074, max= 3032, avg=2451.47, stdev=392.05, samples=38 00:12:58.922 lat (usec) : 250=98.29%, 500=1.68%, 750=0.02%, 1000=0.01% 00:12:58.922 lat (msec) : 2=0.01%, >=2000=0.01% 00:12:58.922 cpu : usr=0.62%, sys=2.15%, ctx=95148, majf=0, minf=5 00:12:58.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:58.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.922 issued rwts: total=47516,47616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.922 00:12:58.922 Run status group 0 (all jobs): 00:12:58.922 READ: bw=3168KiB/s (3244kB/s), 3168KiB/s-3168KiB/s (3244kB/s-3244kB/s), io=186MiB (195MB), run=60000-60000msec 00:12:58.922 WRITE: bw=3174KiB/s (3251kB/s), 3174KiB/s-3174KiB/s (3251kB/s-3251kB/s), io=186MiB (195MB), run=60000-60000msec 00:12:58.922 00:12:58.922 Disk stats (read/write): 00:12:58.922 nvme0n1: ios=47364/47562, merge=0/0, ticks=9939/8206, in_queue=18145, util=99.83% 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.922 18:11:15 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.922 18:11:15 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.922 18:11:15 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.922 18:11:15 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.922 18:11:15 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.922 18:11:15 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:12:58.922 nvmf hotplug test: fio successful as expected 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.922 18:11:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.922 18:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:58.922 18:11:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:12:58.922 18:11:15 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:12:58.922 18:11:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.922 18:11:15 -- nvmf/common.sh@116 -- # sync 00:12:58.922 18:11:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.922 18:11:15 -- nvmf/common.sh@119 -- # set +e 00:12:58.922 18:11:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.922 18:11:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.922 rmmod nvme_tcp 00:12:58.922 rmmod nvme_fabrics 00:12:58.922 rmmod nvme_keyring 00:12:58.922 18:11:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.922 18:11:15 -- nvmf/common.sh@123 -- # set -e 00:12:58.922 18:11:15 -- nvmf/common.sh@124 -- # return 0 00:12:58.922 18:11:15 -- nvmf/common.sh@477 -- # '[' -n 67498 ']' 00:12:58.922 18:11:15 -- nvmf/common.sh@478 -- # killprocess 67498 00:12:58.922 18:11:15 -- common/autotest_common.sh@936 -- # '[' -z 67498 ']' 00:12:58.922 18:11:15 -- common/autotest_common.sh@940 -- # kill -0 67498 00:12:58.922 18:11:15 -- common/autotest_common.sh@941 -- # uname 00:12:58.922 18:11:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.922 18:11:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67498 00:12:58.922 18:11:15 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:12:58.922 18:11:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.922 18:11:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67498' 00:12:58.922 killing process with pid 67498 00:12:58.922 18:11:15 -- common/autotest_common.sh@955 -- # kill 67498 00:12:58.922 18:11:15 -- common/autotest_common.sh@960 -- # wait 67498 00:12:58.922 18:11:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.922 18:11:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.922 18:11:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.922 18:11:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.922 18:11:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.922 18:11:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.922 18:11:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.922 18:11:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.922 18:11:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:58.922 00:12:58.922 real 1m4.604s 00:12:58.922 user 3m54.157s 00:12:58.922 sys 0m21.200s 00:12:58.922 18:11:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.922 18:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:58.922 ************************************ 00:12:58.922 END TEST nvmf_initiator_timeout 00:12:58.922 ************************************ 00:12:58.922 18:11:15 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:12:58.922 18:11:15 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:12:58.922 18:11:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.922 18:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:58.923 18:11:15 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:12:58.923 18:11:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.923 18:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:58.923 18:11:15 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:12:58.923 18:11:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:12:58.923 18:11:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.923 18:11:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.923 18:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:58.923 ************************************ 00:12:58.923 START TEST nvmf_identify 00:12:58.923 ************************************ 00:12:58.923 18:11:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:12:58.923 * Looking for test storage... 
00:12:58.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:58.923 18:11:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.923 18:11:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.923 18:11:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:58.923 18:11:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:58.923 18:11:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:58.923 18:11:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:58.923 18:11:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:58.923 18:11:15 -- scripts/common.sh@335 -- # IFS=.-: 00:12:58.923 18:11:15 -- scripts/common.sh@335 -- # read -ra ver1 00:12:58.923 18:11:15 -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.923 18:11:15 -- scripts/common.sh@336 -- # read -ra ver2 00:12:58.923 18:11:15 -- scripts/common.sh@337 -- # local 'op=<' 00:12:58.923 18:11:15 -- scripts/common.sh@339 -- # ver1_l=2 00:12:58.923 18:11:15 -- scripts/common.sh@340 -- # ver2_l=1 00:12:58.923 18:11:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:58.923 18:11:15 -- scripts/common.sh@343 -- # case "$op" in 00:12:58.923 18:11:15 -- scripts/common.sh@344 -- # : 1 00:12:58.923 18:11:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:58.923 18:11:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.923 18:11:15 -- scripts/common.sh@364 -- # decimal 1 00:12:58.923 18:11:15 -- scripts/common.sh@352 -- # local d=1 00:12:58.923 18:11:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.923 18:11:15 -- scripts/common.sh@354 -- # echo 1 00:12:58.923 18:11:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:58.923 18:11:15 -- scripts/common.sh@365 -- # decimal 2 00:12:58.923 18:11:15 -- scripts/common.sh@352 -- # local d=2 00:12:58.923 18:11:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.923 18:11:15 -- scripts/common.sh@354 -- # echo 2 00:12:58.923 18:11:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:58.923 18:11:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:58.923 18:11:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:58.923 18:11:15 -- scripts/common.sh@367 -- # return 0 00:12:58.923 18:11:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.923 18:11:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:58.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.923 --rc genhtml_branch_coverage=1 00:12:58.923 --rc genhtml_function_coverage=1 00:12:58.923 --rc genhtml_legend=1 00:12:58.923 --rc geninfo_all_blocks=1 00:12:58.923 --rc geninfo_unexecuted_blocks=1 00:12:58.923 00:12:58.923 ' 00:12:58.923 18:11:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:58.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.923 --rc genhtml_branch_coverage=1 00:12:58.923 --rc genhtml_function_coverage=1 00:12:58.923 --rc genhtml_legend=1 00:12:58.923 --rc geninfo_all_blocks=1 00:12:58.923 --rc geninfo_unexecuted_blocks=1 00:12:58.923 00:12:58.923 ' 00:12:58.923 18:11:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:58.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.923 --rc genhtml_branch_coverage=1 00:12:58.923 --rc genhtml_function_coverage=1 00:12:58.923 --rc genhtml_legend=1 00:12:58.923 --rc geninfo_all_blocks=1 00:12:58.923 --rc geninfo_unexecuted_blocks=1 00:12:58.923 00:12:58.923 ' 00:12:58.923 
18:11:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:58.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.923 --rc genhtml_branch_coverage=1 00:12:58.923 --rc genhtml_function_coverage=1 00:12:58.923 --rc genhtml_legend=1 00:12:58.923 --rc geninfo_all_blocks=1 00:12:58.923 --rc geninfo_unexecuted_blocks=1 00:12:58.923 00:12:58.923 ' 00:12:58.923 18:11:15 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.923 18:11:15 -- nvmf/common.sh@7 -- # uname -s 00:12:58.923 18:11:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.923 18:11:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.923 18:11:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.923 18:11:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.923 18:11:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.923 18:11:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.923 18:11:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.923 18:11:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.923 18:11:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.923 18:11:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:12:58.923 18:11:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:12:58.923 18:11:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.923 18:11:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.923 18:11:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.923 18:11:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.923 18:11:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.923 18:11:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.923 18:11:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.923 18:11:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.923 18:11:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.923 18:11:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.923 18:11:15 -- paths/export.sh@5 -- # export PATH 00:12:58.923 18:11:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.923 18:11:15 -- nvmf/common.sh@46 -- # : 0 00:12:58.923 18:11:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.923 18:11:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.923 18:11:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.923 18:11:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.923 18:11:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.923 18:11:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.923 18:11:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.923 18:11:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.923 18:11:15 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.923 18:11:15 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.923 18:11:15 -- host/identify.sh@14 -- # nvmftestinit 00:12:58.923 18:11:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.923 18:11:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.923 18:11:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.923 18:11:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.923 18:11:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.923 18:11:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.923 18:11:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.923 18:11:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.923 18:11:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:58.923 18:11:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:58.923 18:11:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.923 18:11:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.923 18:11:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:58.923 18:11:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:58.923 18:11:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.923 18:11:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.923 18:11:15 -- nvmf/common.sh@146 -- # 
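nvmf/common.sh, traced a little earlier, generates a host NQN with nvme gen-hostnqn and keeps the matching host ID in the NVME_HOST array. A hedged sketch of how that identity is built and how an initiator would typically present it (the nvme connect call is illustrative; identify.sh itself never connects through the kernel initiator):

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # reuse the UUID portion as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"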
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.923 18:11:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.924 18:11:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.924 18:11:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.924 18:11:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.924 18:11:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.924 18:11:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:58.924 18:11:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:58.924 Cannot find device "nvmf_tgt_br" 00:12:58.924 18:11:15 -- nvmf/common.sh@154 -- # true 00:12:58.924 18:11:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.924 Cannot find device "nvmf_tgt_br2" 00:12:58.924 18:11:15 -- nvmf/common.sh@155 -- # true 00:12:58.924 18:11:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:58.924 18:11:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:58.924 Cannot find device "nvmf_tgt_br" 00:12:58.924 18:11:15 -- nvmf/common.sh@157 -- # true 00:12:58.924 18:11:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:58.924 Cannot find device "nvmf_tgt_br2" 00:12:58.924 18:11:15 -- nvmf/common.sh@158 -- # true 00:12:58.924 18:11:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:58.924 18:11:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:58.924 18:11:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.924 18:11:16 -- nvmf/common.sh@161 -- # true 00:12:58.924 18:11:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.924 18:11:16 -- nvmf/common.sh@162 -- # true 00:12:58.924 18:11:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.924 18:11:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.924 18:11:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.924 18:11:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.924 18:11:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.924 18:11:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.924 18:11:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.924 18:11:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:58.924 18:11:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:58.924 18:11:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:58.924 18:11:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:58.924 18:11:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:58.924 18:11:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:58.924 18:11:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.924 18:11:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.924 18:11:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:58.924 18:11:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:58.924 18:11:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:58.924 18:11:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.924 18:11:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.924 18:11:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.924 18:11:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.924 18:11:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.924 18:11:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:58.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:58.924 00:12:58.924 --- 10.0.0.2 ping statistics --- 00:12:58.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.924 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:58.924 18:11:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:58.924 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.924 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:58.924 00:12:58.924 --- 10.0.0.3 ping statistics --- 00:12:58.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.924 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:58.924 18:11:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:58.924 00:12:58.924 --- 10.0.0.1 ping statistics --- 00:12:58.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.924 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:58.924 18:11:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.924 18:11:16 -- nvmf/common.sh@421 -- # return 0 00:12:58.924 18:11:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:58.924 18:11:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.924 18:11:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:58.924 18:11:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:58.924 18:11:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.924 18:11:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:58.924 18:11:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:58.924 18:11:16 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:12:58.924 18:11:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.924 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 18:11:16 -- host/identify.sh@19 -- # nvmfpid=68440 00:12:58.924 18:11:16 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.924 18:11:16 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:58.924 18:11:16 -- host/identify.sh@23 -- # waitforlisten 68440 00:12:58.924 18:11:16 -- common/autotest_common.sh@829 -- # '[' -z 68440 ']' 00:12:58.924 18:11:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.924 18:11:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.924 18:11:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:58.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.924 18:11:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.924 18:11:16 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 [2024-11-18 18:11:16.282486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:58.924 [2024-11-18 18:11:16.282606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.924 [2024-11-18 18:11:16.422005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.924 [2024-11-18 18:11:16.473064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.924 [2024-11-18 18:11:16.473249] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.924 [2024-11-18 18:11:16.473261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.924 [2024-11-18 18:11:16.473269] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.924 [2024-11-18 18:11:16.473799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.924 [2024-11-18 18:11:16.473858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.924 [2024-11-18 18:11:16.474092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.924 [2024-11-18 18:11:16.474097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.924 18:11:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.924 18:11:17 -- common/autotest_common.sh@862 -- # return 0 00:12:58.924 18:11:17 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 [2024-11-18 18:11:17.234116] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.924 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:12:58.924 18:11:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 18:11:17 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 Malloc0 00:12:58.924 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 18:11:17 -- 
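nvmf_veth_init above builds the whole test network in software: the target-side veth ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.2/24 and 10.0.0.3/24), the host keeps nvmf_init_if (10.0.0.1/24), all peer ends are enslaved to the nvmf_br bridge, and TCP/4420 is allowed through iptables before nvmf_tgt is started inside the namespace. A condensed sketch of the same plumbing, with the second target interface omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                   # host -> target namespace, as in the trace
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # then wait for /var/tmp/spdk.sock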
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 [2024-11-18 18:11:17.329414] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.924 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.924 18:11:17 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:12:58.924 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.924 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.924 [2024-11-18 18:11:17.345180] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:58.924 [ 00:12:58.924 { 00:12:58.925 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:58.925 "subtype": "Discovery", 00:12:58.925 "listen_addresses": [ 00:12:58.925 { 00:12:58.925 "transport": "TCP", 00:12:58.925 "trtype": "TCP", 00:12:58.925 "adrfam": "IPv4", 00:12:58.925 "traddr": "10.0.0.2", 00:12:58.925 "trsvcid": "4420" 00:12:58.925 } 00:12:58.925 ], 00:12:58.925 "allow_any_host": true, 00:12:58.925 "hosts": [] 00:12:58.925 }, 00:12:58.925 { 00:12:58.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.925 "subtype": "NVMe", 00:12:58.925 "listen_addresses": [ 00:12:58.925 { 00:12:58.925 "transport": "TCP", 00:12:58.925 "trtype": "TCP", 00:12:58.925 "adrfam": "IPv4", 00:12:58.925 "traddr": "10.0.0.2", 00:12:58.925 "trsvcid": "4420" 00:12:58.925 } 00:12:58.925 ], 00:12:58.925 "allow_any_host": true, 00:12:58.925 "hosts": [], 00:12:58.925 "serial_number": "SPDK00000000000001", 00:12:58.925 "model_number": "SPDK bdev Controller", 00:12:58.925 "max_namespaces": 32, 00:12:58.925 "min_cntlid": 1, 00:12:58.925 "max_cntlid": 65519, 00:12:58.925 "namespaces": [ 00:12:58.925 { 00:12:58.925 "nsid": 1, 00:12:58.925 "bdev_name": "Malloc0", 00:12:58.925 "name": "Malloc0", 00:12:58.925 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:12:58.925 "eui64": "ABCDEF0123456789", 00:12:58.925 "uuid": "600ae5c9-fee2-450e-86a4-2fdb4c858f5b" 00:12:58.925 } 00:12:58.925 ] 00:12:58.925 } 00:12:58.925 ] 00:12:58.925 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.925 18:11:17 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:12:58.925 [2024-11-18 18:11:17.381369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
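The rpc_cmd calls traced above configure the running target over its /var/tmp/spdk.sock RPC socket. Issued by hand with the stock rpc.py they would look roughly like this (a sketch of the equivalent calls, not the harness wrapper itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems                                # returns the JSON dumped above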
00:12:58.925 [2024-11-18 18:11:17.381428] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68475 ] 00:12:58.925 [2024-11-18 18:11:17.512811] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:12:58.925 [2024-11-18 18:11:17.512921] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:58.925 [2024-11-18 18:11:17.512929] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:58.925 [2024-11-18 18:11:17.512944] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:58.925 [2024-11-18 18:11:17.512958] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:58.925 [2024-11-18 18:11:17.513141] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:12:58.925 [2024-11-18 18:11:17.513193] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22d5d30 0 00:12:58.925 [2024-11-18 18:11:17.517639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:58.925 [2024-11-18 18:11:17.517664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:58.925 [2024-11-18 18:11:17.517687] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:58.925 [2024-11-18 18:11:17.517691] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:58.925 [2024-11-18 18:11:17.517736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:58.925 [2024-11-18 18:11:17.517744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:58.925 [2024-11-18 18:11:17.517749] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:58.925 [2024-11-18 18:11:17.517764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:58.925 [2024-11-18 18:11:17.517794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.193 [2024-11-18 18:11:17.525562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.193 [2024-11-18 18:11:17.525584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.193 [2024-11-18 18:11:17.525606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.193 [2024-11-18 18:11:17.525627] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:59.193 [2024-11-18 18:11:17.525636] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:12:59.193 [2024-11-18 18:11:17.525642] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:12:59.193 [2024-11-18 18:11:17.525658] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.193 [2024-11-18 
18:11:17.525667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.193 [2024-11-18 18:11:17.525677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.193 [2024-11-18 18:11:17.525703] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.193 [2024-11-18 18:11:17.525757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.193 [2024-11-18 18:11:17.525764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.193 [2024-11-18 18:11:17.525768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525772] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.193 [2024-11-18 18:11:17.525779] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:12:59.193 [2024-11-18 18:11:17.525786] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:12:59.193 [2024-11-18 18:11:17.525794] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.193 [2024-11-18 18:11:17.525810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.193 [2024-11-18 18:11:17.525827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.193 [2024-11-18 18:11:17.525892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.193 [2024-11-18 18:11:17.525899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.193 [2024-11-18 18:11:17.525903] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.193 [2024-11-18 18:11:17.525914] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:12:59.193 [2024-11-18 18:11:17.525922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:12:59.193 [2024-11-18 18:11:17.525930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.193 [2024-11-18 18:11:17.525938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.193 [2024-11-18 18:11:17.525946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.193 [2024-11-18 18:11:17.525989] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.193 [2024-11-18 18:11:17.526038] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526045] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526049] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:59.194 [2024-11-18 18:11:17.526071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.194 [2024-11-18 18:11:17.526105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.194 [2024-11-18 18:11:17.526152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526159] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526163] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526173] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:12:59.194 [2024-11-18 18:11:17.526179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:12:59.194 [2024-11-18 18:11:17.526187] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:59.194 [2024-11-18 18:11:17.526293] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:12:59.194 [2024-11-18 18:11:17.526299] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:59.194 [2024-11-18 18:11:17.526308] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526313] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.194 [2024-11-18 18:11:17.526357] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.194 [2024-11-18 18:11:17.526407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526417] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
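The DEBUG trace in this stretch is spdk_nvme_identify bringing up the discovery controller's admin queue over NVMe/TCP: ICREQ/ICRESP exchange, FABRIC CONNECT, VS and CAP property reads, CC.EN=1, then waiting for CSTS.RDY=1 before IDENTIFY and the discovery GET LOG PAGE are issued. Equivalent probes of the same listener from outside the harness could look like this (illustrative invocations; the kernel-side discover assumes the nvme-tcp module loaded earlier in the trace):

  nvme discover -t tcp -a 10.0.0.2 -s 4420                # kernel initiator reads the discovery log page
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'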
00:12:59.194 [2024-11-18 18:11:17.526421] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526427] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:59.194 [2024-11-18 18:11:17.526437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.194 [2024-11-18 18:11:17.526468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.194 [2024-11-18 18:11:17.526515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526522] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526535] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:59.194 [2024-11-18 18:11:17.526540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.526548] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:12:59.194 [2024-11-18 18:11:17.526577] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.526591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.194 [2024-11-18 18:11:17.526628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.194 [2024-11-18 18:11:17.526712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.194 [2024-11-18 18:11:17.526719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.194 [2024-11-18 18:11:17.526723] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526728] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d5d30): datao=0, datal=4096, cccid=0 00:12:59.194 [2024-11-18 18:11:17.526732] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2333f30) on tqpair(0x22d5d30): expected_datao=0, 
payload_size=4096 00:12:59.194 [2024-11-18 18:11:17.526741] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526746] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526755] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526779] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:12:59.194 [2024-11-18 18:11:17.526784] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:12:59.194 [2024-11-18 18:11:17.526789] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:12:59.194 [2024-11-18 18:11:17.526794] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:12:59.194 [2024-11-18 18:11:17.526799] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:12:59.194 [2024-11-18 18:11:17.526804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.526817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.526825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.194 [2024-11-18 18:11:17.526861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.194 [2024-11-18 18:11:17.526917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.194 [2024-11-18 18:11:17.526924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.194 [2024-11-18 18:11:17.526928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2333f30) on tqpair=0x22d5d30 00:12:59.194 [2024-11-18 18:11:17.526941] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526945] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.194 [2024-11-18 
18:11:17.526962] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526969] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.194 [2024-11-18 18:11:17.526982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.526989] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.526995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.194 [2024-11-18 18:11:17.527001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.527005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.527009] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.527015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.194 [2024-11-18 18:11:17.527020] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.527032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:59.194 [2024-11-18 18:11:17.527039] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.527043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.194 [2024-11-18 18:11:17.527047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d5d30) 00:12:59.194 [2024-11-18 18:11:17.527054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.195 [2024-11-18 18:11:17.527073] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2333f30, cid 0, qid 0 00:12:59.195 [2024-11-18 18:11:17.527080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334090, cid 1, qid 0 00:12:59.195 [2024-11-18 18:11:17.527085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23341f0, cid 2, qid 0 00:12:59.195 [2024-11-18 18:11:17.527089] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.195 [2024-11-18 18:11:17.527094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23344b0, cid 4, qid 0 00:12:59.195 [2024-11-18 18:11:17.527179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.195 [2024-11-18 18:11:17.527186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.195 [2024-11-18 18:11:17.527190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x23344b0) on tqpair=0x22d5d30 00:12:59.195 [2024-11-18 18:11:17.527200] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:12:59.195 [2024-11-18 18:11:17.527206] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:12:59.195 [2024-11-18 18:11:17.527217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d5d30) 00:12:59.195 [2024-11-18 18:11:17.527233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.195 [2024-11-18 18:11:17.527251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23344b0, cid 4, qid 0 00:12:59.195 [2024-11-18 18:11:17.527305] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.195 [2024-11-18 18:11:17.527311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.195 [2024-11-18 18:11:17.527315] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527319] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d5d30): datao=0, datal=4096, cccid=4 00:12:59.195 [2024-11-18 18:11:17.527323] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23344b0) on tqpair(0x22d5d30): expected_datao=0, payload_size=4096 00:12:59.195 [2024-11-18 18:11:17.527331] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527335] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.195 [2024-11-18 18:11:17.527350] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.195 [2024-11-18 18:11:17.527353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23344b0) on tqpair=0x22d5d30 00:12:59.195 [2024-11-18 18:11:17.527371] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:12:59.195 [2024-11-18 18:11:17.527398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d5d30) 00:12:59.195 [2024-11-18 18:11:17.527416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.195 [2024-11-18 18:11:17.527424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527428] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22d5d30) 00:12:59.195 [2024-11-18 18:11:17.527438] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.195 [2024-11-18 18:11:17.527461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23344b0, cid 4, qid 0 00:12:59.195 [2024-11-18 18:11:17.527468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334610, cid 5, qid 0 00:12:59.195 [2024-11-18 18:11:17.527583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.195 [2024-11-18 18:11:17.527592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.195 [2024-11-18 18:11:17.527596] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527599] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d5d30): datao=0, datal=1024, cccid=4 00:12:59.195 [2024-11-18 18:11:17.527604] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23344b0) on tqpair(0x22d5d30): expected_datao=0, payload_size=1024 00:12:59.195 [2024-11-18 18:11:17.527612] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527616] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527622] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.195 [2024-11-18 18:11:17.527627] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.195 [2024-11-18 18:11:17.527631] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334610) on tqpair=0x22d5d30 00:12:59.195 [2024-11-18 18:11:17.527654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.195 [2024-11-18 18:11:17.527661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.195 [2024-11-18 18:11:17.527665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23344b0) on tqpair=0x22d5d30 00:12:59.195 [2024-11-18 18:11:17.527686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527692] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d5d30) 00:12:59.195 [2024-11-18 18:11:17.527704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.195 [2024-11-18 18:11:17.527728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23344b0, cid 4, qid 0 00:12:59.195 [2024-11-18 18:11:17.527794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.195 [2024-11-18 18:11:17.527801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.195 [2024-11-18 18:11:17.527805] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527809] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d5d30): datao=0, datal=3072, cccid=4 00:12:59.195 [2024-11-18 18:11:17.527813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23344b0) on tqpair(0x22d5d30): expected_datao=0, payload_size=3072 00:12:59.195 [2024-11-18 
18:11:17.527821] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527825] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.195 [2024-11-18 18:11:17.527839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.195 [2024-11-18 18:11:17.527842] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23344b0) on tqpair=0x22d5d30 00:12:59.195 [2024-11-18 18:11:17.527857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.195 [2024-11-18 18:11:17.527866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d5d30) 00:12:59.195 [2024-11-18 18:11:17.527873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.195 [2024-11-18 18:11:17.527895] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23344b0, cid 4, qid 0 00:12:59.195 ===================================================== 00:12:59.195 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:12:59.195 ===================================================== 00:12:59.195 Controller Capabilities/Features 00:12:59.195 ================================ 00:12:59.195 Vendor ID: 0000 00:12:59.195 Subsystem Vendor ID: 0000 00:12:59.195 Serial Number: .................... 00:12:59.195 Model Number: ........................................ 
00:12:59.195 Firmware Version: 24.01.1 00:12:59.195 Recommended Arb Burst: 0 00:12:59.195 IEEE OUI Identifier: 00 00 00 00:12:59.195 Multi-path I/O 00:12:59.195 May have multiple subsystem ports: No 00:12:59.195 May have multiple controllers: No 00:12:59.195 Associated with SR-IOV VF: No 00:12:59.195 Max Data Transfer Size: 131072 00:12:59.195 Max Number of Namespaces: 0 00:12:59.195 Max Number of I/O Queues: 1024 00:12:59.195 NVMe Specification Version (VS): 1.3 00:12:59.195 NVMe Specification Version (Identify): 1.3 00:12:59.195 Maximum Queue Entries: 128 00:12:59.195 Contiguous Queues Required: Yes 00:12:59.195 Arbitration Mechanisms Supported 00:12:59.195 Weighted Round Robin: Not Supported 00:12:59.195 Vendor Specific: Not Supported 00:12:59.195 Reset Timeout: 15000 ms 00:12:59.195 Doorbell Stride: 4 bytes 00:12:59.195 NVM Subsystem Reset: Not Supported 00:12:59.195 Command Sets Supported 00:12:59.195 NVM Command Set: Supported 00:12:59.195 Boot Partition: Not Supported 00:12:59.195 Memory Page Size Minimum: 4096 bytes 00:12:59.195 Memory Page Size Maximum: 4096 bytes 00:12:59.195 Persistent Memory Region: Not Supported 00:12:59.195 Optional Asynchronous Events Supported 00:12:59.195 Namespace Attribute Notices: Not Supported 00:12:59.195 Firmware Activation Notices: Not Supported 00:12:59.195 ANA Change Notices: Not Supported 00:12:59.195 PLE Aggregate Log Change Notices: Not Supported 00:12:59.195 LBA Status Info Alert Notices: Not Supported 00:12:59.195 EGE Aggregate Log Change Notices: Not Supported 00:12:59.195 Normal NVM Subsystem Shutdown event: Not Supported 00:12:59.195 Zone Descriptor Change Notices: Not Supported 00:12:59.195 Discovery Log Change Notices: Supported 00:12:59.195 Controller Attributes 00:12:59.195 128-bit Host Identifier: Not Supported 00:12:59.195 Non-Operational Permissive Mode: Not Supported 00:12:59.195 NVM Sets: Not Supported 00:12:59.195 Read Recovery Levels: Not Supported 00:12:59.195 Endurance Groups: Not Supported 00:12:59.196 Predictable Latency Mode: Not Supported 00:12:59.196 Traffic Based Keep ALive: Not Supported 00:12:59.196 Namespace Granularity: Not Supported 00:12:59.196 SQ Associations: Not Supported 00:12:59.196 UUID List: Not Supported 00:12:59.196 Multi-Domain Subsystem: Not Supported 00:12:59.196 Fixed Capacity Management: Not Supported 00:12:59.196 Variable Capacity Management: Not Supported 00:12:59.196 Delete Endurance Group: Not Supported 00:12:59.196 Delete NVM Set: Not Supported 00:12:59.196 Extended LBA Formats Supported: Not Supported 00:12:59.196 Flexible Data Placement Supported: Not Supported 00:12:59.196 00:12:59.196 Controller Memory Buffer Support 00:12:59.196 ================================ 00:12:59.196 Supported: No 00:12:59.196 00:12:59.196 Persistent Memory Region Support 00:12:59.196 ================================ 00:12:59.196 Supported: No 00:12:59.196 00:12:59.196 Admin Command Set Attributes 00:12:59.196 ============================ 00:12:59.196 Security Send/Receive: Not Supported 00:12:59.196 Format NVM: Not Supported 00:12:59.196 Firmware Activate/Download: Not Supported 00:12:59.196 Namespace Management: Not Supported 00:12:59.196 Device Self-Test: Not Supported 00:12:59.196 Directives: Not Supported 00:12:59.196 NVMe-MI: Not Supported 00:12:59.196 Virtualization Management: Not Supported 00:12:59.196 Doorbell Buffer Config: Not Supported 00:12:59.196 Get LBA Status Capability: Not Supported 00:12:59.196 Command & Feature Lockdown Capability: Not Supported 00:12:59.196 Abort Command Limit: 1 00:12:59.196 
Async Event Request Limit: 4 00:12:59.196 Number of Firmware Slots: N/A 00:12:59.196 Firmware Slot 1 Read-Only: N/A 00:12:59.196 Firmware Activation Without Reset: N/A 00:12:59.196 Multiple Update Detection Support: N/A 00:12:59.196 Firmware Update Granularity: No Information Provided 00:12:59.196 Per-Namespace SMART Log: No 00:12:59.196 Asymmetric Namespace Access Log Page: Not Supported 00:12:59.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:12:59.196 Command Effects Log Page: Not Supported 00:12:59.196 Get Log Page Extended Data: Supported 00:12:59.196 Telemetry Log Pages: Not Supported 00:12:59.196 Persistent Event Log Pages: Not Supported 00:12:59.196 Supported Log Pages Log Page: May Support 00:12:59.196 Commands Supported & Effects Log Page: Not Supported 00:12:59.196 Feature Identifiers & Effects Log Page:May Support 00:12:59.196 NVMe-MI Commands & Effects Log Page: May Support 00:12:59.196 Data Area 4 for Telemetry Log: Not Supported 00:12:59.196 Error Log Page Entries Supported: 128 00:12:59.196 Keep Alive: Not Supported 00:12:59.196 00:12:59.196 NVM Command Set Attributes 00:12:59.196 ========================== 00:12:59.196 Submission Queue Entry Size 00:12:59.196 Max: 1 00:12:59.196 Min: 1 00:12:59.196 Completion Queue Entry Size 00:12:59.196 Max: 1 00:12:59.196 Min: 1 00:12:59.196 Number of Namespaces: 0 00:12:59.196 Compare Command: Not Supported 00:12:59.196 Write Uncorrectable Command: Not Supported 00:12:59.196 Dataset Management Command: Not Supported 00:12:59.196 Write Zeroes Command: Not Supported 00:12:59.196 Set Features Save Field: Not Supported 00:12:59.196 Reservations: Not Supported 00:12:59.196 Timestamp: Not Supported 00:12:59.196 Copy: Not Supported 00:12:59.196 Volatile Write Cache: Not Present 00:12:59.196 Atomic Write Unit (Normal): 1 00:12:59.196 Atomic Write Unit (PFail): 1 00:12:59.196 Atomic Compare & Write Unit: 1 00:12:59.196 Fused Compare & Write: Supported 00:12:59.196 Scatter-Gather List 00:12:59.196 SGL Command Set: Supported 00:12:59.196 SGL Keyed: Supported 00:12:59.196 SGL Bit Bucket Descriptor: Not Supported 00:12:59.196 SGL Metadata Pointer: Not Supported 00:12:59.196 Oversized SGL: Not Supported 00:12:59.196 SGL Metadata Address: Not Supported 00:12:59.196 SGL Offset: Supported 00:12:59.196 Transport SGL Data Block: Not Supported 00:12:59.196 Replay Protected Memory Block: Not Supported 00:12:59.196 00:12:59.196 Firmware Slot Information 00:12:59.196 ========================= 00:12:59.196 Active slot: 0 00:12:59.196 00:12:59.196 00:12:59.196 Error Log 00:12:59.196 ========= 00:12:59.196 00:12:59.196 Active Namespaces 00:12:59.196 ================= 00:12:59.196 Discovery Log Page 00:12:59.196 ================== 00:12:59.196 Generation Counter: 2 00:12:59.196 Number of Records: 2 00:12:59.196 Record Format: 0 00:12:59.196 00:12:59.196 Discovery Log Entry 0 00:12:59.196 ---------------------- 00:12:59.196 Transport Type: 3 (TCP) 00:12:59.196 Address Family: 1 (IPv4) 00:12:59.196 Subsystem Type: 3 (Current Discovery Subsystem) 00:12:59.196 Entry Flags: 00:12:59.196 Duplicate Returned Information: 1 00:12:59.196 Explicit Persistent Connection Support for Discovery: 1 00:12:59.196 Transport Requirements: 00:12:59.196 Secure Channel: Not Required 00:12:59.196 Port ID: 0 (0x0000) 00:12:59.196 Controller ID: 65535 (0xffff) 00:12:59.196 Admin Max SQ Size: 128 00:12:59.196 Transport Service Identifier: 4420 00:12:59.196 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:12:59.196 Transport Address: 10.0.0.2 00:12:59.196 
Discovery Log Entry 1 00:12:59.196 ---------------------- 00:12:59.196 Transport Type: 3 (TCP) 00:12:59.196 Address Family: 1 (IPv4) 00:12:59.196 Subsystem Type: 2 (NVM Subsystem) 00:12:59.196 Entry Flags: 00:12:59.196 Duplicate Returned Information: 0 00:12:59.196 Explicit Persistent Connection Support for Discovery: 0 00:12:59.196 Transport Requirements: 00:12:59.196 Secure Channel: Not Required 00:12:59.196 Port ID: 0 (0x0000) 00:12:59.196 Controller ID: 65535 (0xffff) 00:12:59.196 Admin Max SQ Size: 128 00:12:59.196 Transport Service Identifier: 4420 00:12:59.196 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:12:59.196 Transport Address: 10.0.0.2 [2024-11-18 18:11:17.527957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.196 [2024-11-18 18:11:17.527963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.196 [2024-11-18 18:11:17.527967] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.527971] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d5d30): datao=0, datal=8, cccid=4 00:12:59.196 [2024-11-18 18:11:17.527976] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23344b0) on tqpair(0x22d5d30): expected_datao=0, payload_size=8 00:12:59.196 [2024-11-18 18:11:17.527983] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.527987] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.196 [2024-11-18 18:11:17.528008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.196 [2024-11-18 18:11:17.528011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23344b0) on tqpair=0x22d5d30 00:12:59.196 [2024-11-18 18:11:17.528111] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:12:59.196 [2024-11-18 18:11:17.528127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.196 [2024-11-18 18:11:17.528134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.196 [2024-11-18 18:11:17.528141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.196 [2024-11-18 18:11:17.528147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.196 [2024-11-18 18:11:17.528156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.196 [2024-11-18 18:11:17.528172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.196 [2024-11-18 18:11:17.528194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.196 [2024-11-18 18:11:17.528239] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.196 [2024-11-18 18:11:17.528246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.196 [2024-11-18 18:11:17.528250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.196 [2024-11-18 18:11:17.528262] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.196 [2024-11-18 18:11:17.528270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.196 [2024-11-18 18:11:17.528278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.528367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528380] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:12:59.197 [2024-11-18 18:11:17.528385] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:12:59.197 [2024-11-18 18:11:17.528395] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528403] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.528484] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528503] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528547] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.528599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528666] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528713] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.528720] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528724] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.528821] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528829] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.528922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
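Editor's note: the Discovery Log Page dumped above (generation counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both reachable over TCP/IPv4 at 10.0.0.2, service ID 4420) can be re-queried by hand. A minimal sketch follows, assuming interactive access to the same test VM and the binaries this run already uses; the nvme-cli variant additionally assumes nvme-cli is installed, which this job does not use.

# Query the discovery controller with the identify tool used elsewhere in this run
# (no subnqn in the transport ID, which should land on nqn.2014-08.org.nvmexpress.discovery).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

# Hypothetical equivalent with the kernel initiator's CLI (nvme-cli, not part of this job):
nvme discover -t tcp -a 10.0.0.2 -s 4420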
00:12:59.197 [2024-11-18 18:11:17.528929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.528933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.528948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.528956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.528964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.528980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.529020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.529027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.529046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.529060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529069] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.529076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.529091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.529139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.529146] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.529149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.529164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529168] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529172] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.529179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.529195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.529240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.529247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.529250] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.529265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.529280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.529296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.529341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.529363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.529366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529370] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.529382] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529386] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.529397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.529413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.529458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.529476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.197 [2024-11-18 18:11:17.529481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.197 [2024-11-18 18:11:17.529497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.197 [2024-11-18 18:11:17.529505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.197 [2024-11-18 18:11:17.529513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.197 [2024-11-18 18:11:17.533569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.197 [2024-11-18 18:11:17.533596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.197 [2024-11-18 18:11:17.533604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.533608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.533612] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on 
tqpair=0x22d5d30 00:12:59.198 [2024-11-18 18:11:17.533628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.533633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.533637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d5d30) 00:12:59.198 [2024-11-18 18:11:17.533645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.198 [2024-11-18 18:11:17.533668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2334350, cid 3, qid 0 00:12:59.198 [2024-11-18 18:11:17.533732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.198 [2024-11-18 18:11:17.533739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.533743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.533747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2334350) on tqpair=0x22d5d30 00:12:59.198 [2024-11-18 18:11:17.533756] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:12:59.198 00:12:59.198 18:11:17 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:12:59.198 [2024-11-18 18:11:17.572917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.198 [2024-11-18 18:11:17.572989] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68477 ] 00:12:59.198 [2024-11-18 18:11:17.709325] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:12:59.198 [2024-11-18 18:11:17.709401] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:12:59.198 [2024-11-18 18:11:17.709407] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:12:59.198 [2024-11-18 18:11:17.709419] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:12:59.198 [2024-11-18 18:11:17.709432] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:12:59.198 [2024-11-18 18:11:17.709580] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:12:59.198 [2024-11-18 18:11:17.709634] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22f0d30 0 00:12:59.198 [2024-11-18 18:11:17.716566] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:12:59.198 [2024-11-18 18:11:17.716589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:12:59.198 [2024-11-18 18:11:17.716611] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:12:59.198 [2024-11-18 18:11:17.716615] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:12:59.198 [2024-11-18 18:11:17.716657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.716665] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.716669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.198 [2024-11-18 18:11:17.716684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:59.198 [2024-11-18 18:11:17.716715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.198 [2024-11-18 18:11:17.723547] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.198 [2024-11-18 18:11:17.723568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.723589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.198 [2024-11-18 18:11:17.723610] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:12:59.198 [2024-11-18 18:11:17.723618] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:12:59.198 [2024-11-18 18:11:17.723624] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:12:59.198 [2024-11-18 18:11:17.723640] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.198 [2024-11-18 18:11:17.723658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.198 [2024-11-18 18:11:17.723686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.198 [2024-11-18 18:11:17.723736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.198 [2024-11-18 18:11:17.723743] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.723747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.198 [2024-11-18 18:11:17.723758] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:12:59.198 [2024-11-18 18:11:17.723766] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:12:59.198 [2024-11-18 18:11:17.723774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.198 [2024-11-18 18:11:17.723789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.198 [2024-11-18 18:11:17.723808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.198 [2024-11-18 18:11:17.723890] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.198 [2024-11-18 18:11:17.723898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.723902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.198 [2024-11-18 18:11:17.723914] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:12:59.198 [2024-11-18 18:11:17.723923] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:12:59.198 [2024-11-18 18:11:17.723931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.723939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.198 [2024-11-18 18:11:17.723947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.198 [2024-11-18 18:11:17.723966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.198 [2024-11-18 18:11:17.724013] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.198 [2024-11-18 18:11:17.724020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.198 [2024-11-18 18:11:17.724024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.724028] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.198 [2024-11-18 18:11:17.724035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:59.198 [2024-11-18 18:11:17.724046] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.724051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.198 [2024-11-18 18:11:17.724054] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.198 [2024-11-18 18:11:17.724062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.198 [2024-11-18 18:11:17.724081] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.724124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.724131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.724135] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.724145] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:12:59.199 [2024-11-18 18:11:17.724151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:12:59.199 
[2024-11-18 18:11:17.724159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:59.199 [2024-11-18 18:11:17.724265] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:12:59.199 [2024-11-18 18:11:17.724271] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:59.199 [2024-11-18 18:11:17.724280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.724297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.199 [2024-11-18 18:11:17.724317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.724366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.724374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.724378] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724383] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.724389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:59.199 [2024-11-18 18:11:17.724400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.724416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.199 [2024-11-18 18:11:17.724434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.724483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.724490] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.724495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.724505] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:59.199 [2024-11-18 18:11:17.724511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.724519] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:12:59.199 [2024-11-18 18:11:17.724535] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.724562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.724579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.199 [2024-11-18 18:11:17.724614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.724714] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.199 [2024-11-18 18:11:17.724722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.199 [2024-11-18 18:11:17.724727] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724731] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=4096, cccid=0 00:12:59.199 [2024-11-18 18:11:17.724736] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234ef30) on tqpair(0x22f0d30): expected_datao=0, payload_size=4096 00:12:59.199 [2024-11-18 18:11:17.724746] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724751] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.724768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.724772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724776] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.724787] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:12:59.199 [2024-11-18 18:11:17.724792] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:12:59.199 [2024-11-18 18:11:17.724797] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:12:59.199 [2024-11-18 18:11:17.724802] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:12:59.199 [2024-11-18 18:11:17.724807] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:12:59.199 [2024-11-18 18:11:17.724813] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.724827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.724836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.724853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.199 [2024-11-18 18:11:17.724874] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.724941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.724949] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.724953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234ef30) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.724966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.724981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.199 [2024-11-18 18:11:17.724988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.724995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.725001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.199 [2024-11-18 18:11:17.725008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725015] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.725021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.199 [2024-11-18 18:11:17.725028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.725041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.199 [2024-11-18 18:11:17.725046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.725060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:59.199 [2024-11-18 18:11:17.725068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.199 [2024-11-18 
18:11:17.725076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.199 [2024-11-18 18:11:17.725083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.199 [2024-11-18 18:11:17.725104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234ef30, cid 0, qid 0 00:12:59.199 [2024-11-18 18:11:17.725112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f090, cid 1, qid 0 00:12:59.199 [2024-11-18 18:11:17.725117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f1f0, cid 2, qid 0 00:12:59.199 [2024-11-18 18:11:17.725122] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.199 [2024-11-18 18:11:17.725127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.199 [2024-11-18 18:11:17.725215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.199 [2024-11-18 18:11:17.725223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.199 [2024-11-18 18:11:17.725227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.199 [2024-11-18 18:11:17.725231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.199 [2024-11-18 18:11:17.725238] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:12:59.200 [2024-11-18 18:11:17.725243] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725252] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725279] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.725286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.200 [2024-11-18 18:11:17.725306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.200 [2024-11-18 18:11:17.725351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.725359] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.725363] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725367] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 18:11:17.725430] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725441] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.725466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.200 [2024-11-18 18:11:17.725486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.200 [2024-11-18 18:11:17.725558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.200 [2024-11-18 18:11:17.725567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.200 [2024-11-18 18:11:17.725571] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725591] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=4096, cccid=4 00:12:59.200 [2024-11-18 18:11:17.725597] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f4b0) on tqpair(0x22f0d30): expected_datao=0, payload_size=4096 00:12:59.200 [2024-11-18 18:11:17.725605] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725609] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.725626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.725630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 18:11:17.725651] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:12:59.200 [2024-11-18 18:11:17.725662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725673] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725691] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.725699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.200 [2024-11-18 18:11:17.725721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.200 [2024-11-18 18:11:17.725795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.200 [2024-11-18 18:11:17.725803] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
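Editor's note: the debug traces in this stretch record the initialization handshake against nqn.2016-06.io.spdk:cnode1 driven by the spdk_nvme_identify command logged above: fabric CONNECT on the admin queue, VS/CAP property reads, CC.EN set to 1, wait for CSTS.RDY = 1, IDENTIFY controller, AER configuration, keep-alive and queue-count Set Features, and the per-namespace IDENTIFY commands that continue below. A minimal sketch of that invocation with the transport-ID string broken out; the field annotations are the editor's reading of the traces and discovery entries, not output from the run.

# Same command as logged above; each transport-ID field maps onto what the traces show.
#   trtype:tcp                          TCP transport (the nvme_tcp.c handlers above)
#   adrfam:IPv4                         address family 1, as in the discovery entries
#   traddr:10.0.0.2 trsvcid:4420        target listener address and service ID (port)
#   subnqn:nqn.2016-06.io.spdk:cnode1   NVM subsystem from Discovery Log Entry 1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all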
00:12:59.200 [2024-11-18 18:11:17.725807] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725811] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=4096, cccid=4 00:12:59.200 [2024-11-18 18:11:17.725816] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f4b0) on tqpair(0x22f0d30): expected_datao=0, payload_size=4096 00:12:59.200 [2024-11-18 18:11:17.725824] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725829] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.725845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.725849] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 18:11:17.725870] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.725891] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725896] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.725900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.725907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.200 [2024-11-18 18:11:17.725929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.200 [2024-11-18 18:11:17.726036] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.200 [2024-11-18 18:11:17.726044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.200 [2024-11-18 18:11:17.726048] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726053] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=4096, cccid=4 00:12:59.200 [2024-11-18 18:11:17.726058] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f4b0) on tqpair(0x22f0d30): expected_datao=0, payload_size=4096 00:12:59.200 [2024-11-18 18:11:17.726066] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726070] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.726086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.726090] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726095] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 
18:11:17.726105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726115] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726129] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726143] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726149] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:12:59.200 [2024-11-18 18:11:17.726154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:12:59.200 [2024-11-18 18:11:17.726160] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:12:59.200 [2024-11-18 18:11:17.726177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726182] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.726194] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.200 [2024-11-18 18:11:17.726201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.726216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.200 [2024-11-18 18:11:17.726241] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.200 [2024-11-18 18:11:17.726249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f610, cid 5, qid 0 00:12:59.200 [2024-11-18 18:11:17.726340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.726348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.726352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726356] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 18:11:17.726365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.200 [2024-11-18 18:11:17.726372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.200 [2024-11-18 18:11:17.726376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726380] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f610) on tqpair=0x22f0d30 00:12:59.200 [2024-11-18 18:11:17.726392] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.200 [2024-11-18 18:11:17.726401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f0d30) 00:12:59.200 [2024-11-18 18:11:17.726408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726428] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f610, cid 5, qid 0 00:12:59.201 [2024-11-18 18:11:17.726473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.726480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.726485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f610) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.726501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f610, cid 5, qid 0 00:12:59.201 [2024-11-18 18:11:17.726604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.726614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.726618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f610) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.726635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f610, cid 5, qid 0 00:12:59.201 [2024-11-18 18:11:17.726725] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.726733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.726737] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726741] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f610) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.726757] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726790] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726813] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.726836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22f0d30) 00:12:59.201 [2024-11-18 18:11:17.726843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.201 [2024-11-18 18:11:17.726864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f610, cid 5, qid 0 00:12:59.201 [2024-11-18 18:11:17.726871] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f4b0, cid 4, qid 0 00:12:59.201 [2024-11-18 18:11:17.726876] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f770, cid 6, qid 0 00:12:59.201 [2024-11-18 18:11:17.726881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f8d0, cid 7, qid 0 00:12:59.201 [2024-11-18 18:11:17.727030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.201 [2024-11-18 18:11:17.727038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.201 [2024-11-18 18:11:17.727042] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727047] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=8192, cccid=5 00:12:59.201 [2024-11-18 18:11:17.727052] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f610) on tqpair(0x22f0d30): expected_datao=0, payload_size=8192 00:12:59.201 [2024-11-18 18:11:17.727070] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727075] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.201 [2024-11-18 18:11:17.727089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.201 [2024-11-18 18:11:17.727093] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727097] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=512, cccid=4 00:12:59.201 [2024-11-18 18:11:17.727102] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f4b0) on tqpair(0x22f0d30): expected_datao=0, payload_size=512 00:12:59.201 [2024-11-18 18:11:17.727109] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727114] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.201 [2024-11-18 18:11:17.727127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.201 [2024-11-18 18:11:17.727131] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727135] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=512, cccid=6 00:12:59.201 [2024-11-18 18:11:17.727140] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f770) on tqpair(0x22f0d30): expected_datao=0, payload_size=512 00:12:59.201 [2024-11-18 18:11:17.727147] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727151] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727158] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:12:59.201 [2024-11-18 18:11:17.727164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:12:59.201 [2024-11-18 18:11:17.727168] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727172] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22f0d30): datao=0, datal=4096, cccid=7 00:12:59.201 [2024-11-18 18:11:17.727177] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234f8d0) on tqpair(0x22f0d30): expected_datao=0, payload_size=4096 00:12:59.201 [2024-11-18 18:11:17.727184] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727189] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.727205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.727209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f610) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.727231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.727239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.727243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x234f4b0) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.727259] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 [2024-11-18 18:11:17.727266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.201 [2024-11-18 18:11:17.727285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.201 [2024-11-18 18:11:17.727289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f770) on tqpair=0x22f0d30 00:12:59.201 [2024-11-18 18:11:17.727298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.201 ===================================================== 00:12:59.201 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:59.201 ===================================================== 00:12:59.201 Controller Capabilities/Features 00:12:59.201 ================================ 00:12:59.201 Vendor ID: 8086 00:12:59.201 Subsystem Vendor ID: 8086 00:12:59.201 Serial Number: SPDK00000000000001 00:12:59.201 Model Number: SPDK bdev Controller 00:12:59.201 Firmware Version: 24.01.1 00:12:59.201 Recommended Arb Burst: 6 00:12:59.201 IEEE OUI Identifier: e4 d2 5c 00:12:59.201 Multi-path I/O 00:12:59.201 May have multiple subsystem ports: Yes 00:12:59.201 May have multiple controllers: Yes 00:12:59.201 Associated with SR-IOV VF: No 00:12:59.201 Max Data Transfer Size: 131072 00:12:59.201 Max Number of Namespaces: 32 00:12:59.201 Max Number of I/O Queues: 127 00:12:59.201 NVMe Specification Version (VS): 1.3 00:12:59.201 NVMe Specification Version (Identify): 1.3 00:12:59.201 Maximum Queue Entries: 128 00:12:59.201 Contiguous Queues Required: Yes 00:12:59.201 Arbitration Mechanisms Supported 00:12:59.201 Weighted Round Robin: Not Supported 00:12:59.201 Vendor Specific: Not Supported 00:12:59.201 Reset Timeout: 15000 ms 00:12:59.201 Doorbell Stride: 4 bytes 00:12:59.201 NVM Subsystem Reset: Not Supported 00:12:59.201 Command Sets Supported 00:12:59.201 NVM Command Set: Supported 00:12:59.201 Boot Partition: Not Supported 00:12:59.201 Memory Page Size Minimum: 4096 bytes 00:12:59.201 Memory Page Size Maximum: 4096 bytes 00:12:59.202 Persistent Memory Region: Not Supported 00:12:59.202 Optional Asynchronous Events Supported 00:12:59.202 Namespace Attribute Notices: Supported 00:12:59.202 Firmware Activation Notices: Not Supported 00:12:59.202 ANA Change Notices: Not Supported 00:12:59.202 PLE Aggregate Log Change Notices: Not Supported 00:12:59.202 LBA Status Info Alert Notices: Not Supported 00:12:59.202 EGE Aggregate Log Change Notices: Not Supported 00:12:59.202 Normal NVM Subsystem Shutdown event: Not Supported 00:12:59.202 Zone Descriptor Change Notices: Not Supported 00:12:59.202 Discovery Log Change Notices: Not Supported 00:12:59.202 Controller Attributes 00:12:59.202 128-bit Host Identifier: Supported 00:12:59.202 Non-Operational Permissive Mode: Not Supported 00:12:59.202 NVM Sets: Not Supported 00:12:59.202 Read Recovery Levels: Not Supported 00:12:59.202 Endurance Groups: Not Supported 00:12:59.202 Predictable Latency Mode: Not Supported 00:12:59.202 Traffic Based Keep ALive: Not Supported 00:12:59.202 Namespace Granularity: Not Supported 00:12:59.202 SQ Associations: Not Supported 00:12:59.202 UUID List: Not Supported 00:12:59.202 Multi-Domain Subsystem: Not Supported 00:12:59.202 Fixed Capacity Management: Not Supported 00:12:59.202 Variable Capacity Management: Not Supported 00:12:59.202 Delete Endurance Group: Not Supported 00:12:59.202 Delete NVM Set: Not 
Supported 00:12:59.202 Extended LBA Formats Supported: Not Supported 00:12:59.202 Flexible Data Placement Supported: Not Supported 00:12:59.202 00:12:59.202 Controller Memory Buffer Support 00:12:59.202 ================================ 00:12:59.202 Supported: No 00:12:59.202 00:12:59.202 Persistent Memory Region Support 00:12:59.202 ================================ 00:12:59.202 Supported: No 00:12:59.202 00:12:59.202 Admin Command Set Attributes 00:12:59.202 ============================ 00:12:59.202 Security Send/Receive: Not Supported 00:12:59.202 Format NVM: Not Supported 00:12:59.202 Firmware Activate/Download: Not Supported 00:12:59.202 Namespace Management: Not Supported 00:12:59.202 Device Self-Test: Not Supported 00:12:59.202 Directives: Not Supported 00:12:59.202 NVMe-MI: Not Supported 00:12:59.202 Virtualization Management: Not Supported 00:12:59.202 Doorbell Buffer Config: Not Supported 00:12:59.202 Get LBA Status Capability: Not Supported 00:12:59.202 Command & Feature Lockdown Capability: Not Supported 00:12:59.202 Abort Command Limit: 4 00:12:59.202 Async Event Request Limit: 4 00:12:59.202 Number of Firmware Slots: N/A 00:12:59.202 Firmware Slot 1 Read-Only: N/A 00:12:59.202 Firmware Activation Without Reset: N/A 00:12:59.202 Multiple Update Detection Support: N/A 00:12:59.202 Firmware Update Granularity: No Information Provided 00:12:59.202 Per-Namespace SMART Log: No 00:12:59.202 Asymmetric Namespace Access Log Page: Not Supported 00:12:59.202 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:12:59.202 Command Effects Log Page: Supported 00:12:59.202 Get Log Page Extended Data: Supported 00:12:59.202 Telemetry Log Pages: Not Supported 00:12:59.202 Persistent Event Log Pages: Not Supported 00:12:59.202 Supported Log Pages Log Page: May Support 00:12:59.202 Commands Supported & Effects Log Page: Not Supported 00:12:59.202 Feature Identifiers & Effects Log Page:May Support 00:12:59.202 NVMe-MI Commands & Effects Log Page: May Support 00:12:59.202 Data Area 4 for Telemetry Log: Not Supported 00:12:59.202 Error Log Page Entries Supported: 128 00:12:59.202 Keep Alive: Supported 00:12:59.202 Keep Alive Granularity: 10000 ms 00:12:59.202 00:12:59.202 NVM Command Set Attributes 00:12:59.202 ========================== 00:12:59.202 Submission Queue Entry Size 00:12:59.202 Max: 64 00:12:59.202 Min: 64 00:12:59.202 Completion Queue Entry Size 00:12:59.202 Max: 16 00:12:59.202 Min: 16 00:12:59.202 Number of Namespaces: 32 00:12:59.202 Compare Command: Supported 00:12:59.202 Write Uncorrectable Command: Not Supported 00:12:59.202 Dataset Management Command: Supported 00:12:59.202 Write Zeroes Command: Supported 00:12:59.202 Set Features Save Field: Not Supported 00:12:59.202 Reservations: Supported 00:12:59.202 Timestamp: Not Supported 00:12:59.202 Copy: Supported 00:12:59.202 Volatile Write Cache: Present 00:12:59.202 Atomic Write Unit (Normal): 1 00:12:59.202 Atomic Write Unit (PFail): 1 00:12:59.202 Atomic Compare & Write Unit: 1 00:12:59.202 Fused Compare & Write: Supported 00:12:59.202 Scatter-Gather List 00:12:59.202 SGL Command Set: Supported 00:12:59.202 SGL Keyed: Supported 00:12:59.202 SGL Bit Bucket Descriptor: Not Supported 00:12:59.202 SGL Metadata Pointer: Not Supported 00:12:59.202 Oversized SGL: Not Supported 00:12:59.202 SGL Metadata Address: Not Supported 00:12:59.202 SGL Offset: Supported 00:12:59.202 Transport SGL Data Block: Not Supported 00:12:59.202 Replay Protected Memory Block: Not Supported 00:12:59.202 00:12:59.202 Firmware Slot Information 00:12:59.202 
========================= 00:12:59.202 Active slot: 1 00:12:59.202 Slot 1 Firmware Revision: 24.01.1 00:12:59.202 00:12:59.202 00:12:59.202 Commands Supported and Effects 00:12:59.202 ============================== 00:12:59.202 Admin Commands 00:12:59.202 -------------- 00:12:59.202 Get Log Page (02h): Supported 00:12:59.202 Identify (06h): Supported 00:12:59.202 Abort (08h): Supported 00:12:59.202 Set Features (09h): Supported 00:12:59.202 Get Features (0Ah): Supported 00:12:59.202 Asynchronous Event Request (0Ch): Supported 00:12:59.202 Keep Alive (18h): Supported 00:12:59.202 I/O Commands 00:12:59.202 ------------ 00:12:59.202 Flush (00h): Supported LBA-Change 00:12:59.202 Write (01h): Supported LBA-Change 00:12:59.202 Read (02h): Supported 00:12:59.202 Compare (05h): Supported 00:12:59.202 Write Zeroes (08h): Supported LBA-Change 00:12:59.202 Dataset Management (09h): Supported LBA-Change 00:12:59.202 Copy (19h): Supported LBA-Change 00:12:59.202 Unknown (79h): Supported LBA-Change 00:12:59.202 Unknown (7Ah): Supported 00:12:59.202 00:12:59.202 Error Log 00:12:59.202 ========= 00:12:59.202 00:12:59.202 Arbitration 00:12:59.202 =========== 00:12:59.202 Arbitration Burst: 1 00:12:59.202 00:12:59.202 Power Management 00:12:59.202 ================ 00:12:59.202 Number of Power States: 1 00:12:59.202 Current Power State: Power State #0 00:12:59.202 Power State #0: 00:12:59.202 Max Power: 0.00 W 00:12:59.202 Non-Operational State: Operational 00:12:59.202 Entry Latency: Not Reported 00:12:59.202 Exit Latency: Not Reported 00:12:59.202 Relative Read Throughput: 0 00:12:59.202 Relative Read Latency: 0 00:12:59.202 Relative Write Throughput: 0 00:12:59.202 Relative Write Latency: 0 00:12:59.202 Idle Power: Not Reported 00:12:59.202 Active Power: Not Reported 00:12:59.202 Non-Operational Permissive Mode: Not Supported 00:12:59.202 00:12:59.202 Health Information 00:12:59.202 ================== 00:12:59.202 Critical Warnings: 00:12:59.202 Available Spare Space: OK 00:12:59.202 Temperature: OK 00:12:59.202 Device Reliability: OK 00:12:59.202 Read Only: No 00:12:59.202 Volatile Memory Backup: OK 00:12:59.202 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:59.202 Temperature Threshold: [2024-11-18 18:11:17.727304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.202 [2024-11-18 18:11:17.727308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.202 [2024-11-18 18:11:17.727312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f8d0) on tqpair=0x22f0d30 00:12:59.202 [2024-11-18 18:11:17.727421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.202 [2024-11-18 18:11:17.727428] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.202 [2024-11-18 18:11:17.727432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22f0d30) 00:12:59.202 [2024-11-18 18:11:17.727440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.202 [2024-11-18 18:11:17.727463] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f8d0, cid 7, qid 0 00:12:59.202 [2024-11-18 18:11:17.727520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.202 [2024-11-18 18:11:17.727528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.202 [2024-11-18 18:11:17.727532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:12:59.202 [2024-11-18 18:11:17.727536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f8d0) on tqpair=0x22f0d30 00:12:59.202 [2024-11-18 18:11:17.730648] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:12:59.202 [2024-11-18 18:11:17.730678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.203 [2024-11-18 18:11:17.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.203 [2024-11-18 18:11:17.730694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.203 [2024-11-18 18:11:17.730701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.203 [2024-11-18 18:11:17.730712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.730730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.730758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.730823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.730832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.730836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.730850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.730867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.730891] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.730957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.730965] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.730969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.730974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.730980] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:12:59.203 [2024-11-18 18:11:17.730985] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:12:59.203 [2024-11-18 
18:11:17.730996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731001] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731185] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731201] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731213] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731248] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731301] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731318] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731329] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731334] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:12:59.203 [2024-11-18 18:11:17.731338] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731365] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731572] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731701] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.203 [2024-11-18 18:11:17.731737] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.203 [2024-11-18 18:11:17.731755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.203 [2024-11-18 18:11:17.731798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.203 [2024-11-18 18:11:17.731805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.203 [2024-11-18 18:11:17.731810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.203 [2024-11-18 18:11:17.731825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.203 [2024-11-18 18:11:17.731830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.731834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.731841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.731859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.731908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.731916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.731920] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.731924] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.731935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.731940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.731944] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.731952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.731970] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732047] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732076] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732146] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732222] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732356] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732365] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:12:59.204 [2024-11-18 18:11:17.732441] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732495] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732590] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732595] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732707] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732823] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732827] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732839] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732874] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.732918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.732925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.732930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.732946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.732955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.732963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.732981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.733042] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.733049] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.733054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.733058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.204 [2024-11-18 18:11:17.733070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.733089] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.733093] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.204 [2024-11-18 18:11:17.733101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.204 [2024-11-18 18:11:17.733119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.204 [2024-11-18 18:11:17.733196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.204 [2024-11-18 18:11:17.733204] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.204 [2024-11-18 18:11:17.733208] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.204 [2024-11-18 18:11:17.733212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on 
tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733228] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733321] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733377] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733437] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733564] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733691] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733783] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733787] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733803] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.733820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.733904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.733912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.733916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.733932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733937] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.733941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 
00:12:59.205 [2024-11-18 18:11:17.733958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.733979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.734027] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.734034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.734039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734043] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.734055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734064] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.734072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.734091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.734145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.734152] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.734156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.734173] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.734189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.734208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.734255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.734263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.734267] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734271] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.734283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734293] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.734301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 
[2024-11-18 18:11:17.734320] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.734364] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.734372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.734376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734380] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.734392] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734401] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.734409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.734427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.734474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.205 [2024-11-18 18:11:17.734482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.205 [2024-11-18 18:11:17.734486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.205 [2024-11-18 18:11:17.734502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.205 [2024-11-18 18:11:17.734511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.205 [2024-11-18 18:11:17.734519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.205 [2024-11-18 18:11:17.738557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.205 [2024-11-18 18:11:17.738582] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.206 [2024-11-18 18:11:17.738591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.206 [2024-11-18 18:11:17.738596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.206 [2024-11-18 18:11:17.738600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.206 [2024-11-18 18:11:17.738616] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:12:59.206 [2024-11-18 18:11:17.738621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:12:59.206 [2024-11-18 18:11:17.738625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22f0d30) 00:12:59.206 [2024-11-18 18:11:17.738634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.206 [2024-11-18 18:11:17.738659] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234f350, cid 3, qid 0 00:12:59.206 [2024-11-18 18:11:17.738714] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:12:59.206 [2024-11-18 18:11:17.738722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:12:59.206 [2024-11-18 18:11:17.738727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:12:59.206 [2024-11-18 18:11:17.738731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234f350) on tqpair=0x22f0d30 00:12:59.206 [2024-11-18 18:11:17.738741] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:12:59.206 0 Kelvin (-273 Celsius) 00:12:59.206 Available Spare: 0% 00:12:59.206 Available Spare Threshold: 0% 00:12:59.206 Life Percentage Used: 0% 00:12:59.206 Data Units Read: 0 00:12:59.206 Data Units Written: 0 00:12:59.206 Host Read Commands: 0 00:12:59.206 Host Write Commands: 0 00:12:59.206 Controller Busy Time: 0 minutes 00:12:59.206 Power Cycles: 0 00:12:59.206 Power On Hours: 0 hours 00:12:59.206 Unsafe Shutdowns: 0 00:12:59.206 Unrecoverable Media Errors: 0 00:12:59.206 Lifetime Error Log Entries: 0 00:12:59.206 Warning Temperature Time: 0 minutes 00:12:59.206 Critical Temperature Time: 0 minutes 00:12:59.206 00:12:59.206 Number of Queues 00:12:59.206 ================ 00:12:59.206 Number of I/O Submission Queues: 127 00:12:59.206 Number of I/O Completion Queues: 127 00:12:59.206 00:12:59.206 Active Namespaces 00:12:59.206 ================= 00:12:59.206 Namespace ID:1 00:12:59.206 Error Recovery Timeout: Unlimited 00:12:59.206 Command Set Identifier: NVM (00h) 00:12:59.206 Deallocate: Supported 00:12:59.206 Deallocated/Unwritten Error: Not Supported 00:12:59.206 Deallocated Read Value: Unknown 00:12:59.206 Deallocate in Write Zeroes: Not Supported 00:12:59.206 Deallocated Guard Field: 0xFFFF 00:12:59.206 Flush: Supported 00:12:59.206 Reservation: Supported 00:12:59.206 Namespace Sharing Capabilities: Multiple Controllers 00:12:59.206 Size (in LBAs): 131072 (0GiB) 00:12:59.206 Capacity (in LBAs): 131072 (0GiB) 00:12:59.206 Utilization (in LBAs): 131072 (0GiB) 00:12:59.206 NGUID: ABCDEF0123456789ABCDEF0123456789 00:12:59.206 EUI64: ABCDEF0123456789 00:12:59.206 UUID: 600ae5c9-fee2-450e-86a4-2fdb4c858f5b 00:12:59.206 Thin Provisioning: Not Supported 00:12:59.206 Per-NS Atomic Units: Yes 00:12:59.206 Atomic Boundary Size (Normal): 0 00:12:59.206 Atomic Boundary Size (PFail): 0 00:12:59.206 Atomic Boundary Offset: 0 00:12:59.206 Maximum Single Source Range Length: 65535 00:12:59.206 Maximum Copy Length: 65535 00:12:59.206 Maximum Source Range Count: 1 00:12:59.206 NGUID/EUI64 Never Reused: No 00:12:59.206 Namespace Write Protected: No 00:12:59.206 Number of LBA Formats: 1 00:12:59.206 Current LBA Format: LBA Format #00 00:12:59.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:59.206 00:12:59.206 18:11:17 -- host/identify.sh@51 -- # sync 00:12:59.465 18:11:17 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.465 18:11:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.465 18:11:17 -- common/autotest_common.sh@10 -- # set +x 00:12:59.465 18:11:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.465 18:11:17 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:12:59.465 18:11:17 -- host/identify.sh@56 -- # nvmftestfini 00:12:59.465 18:11:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:59.465 18:11:17 -- nvmf/common.sh@116 -- # sync 00:12:59.465 18:11:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.465 18:11:17 -- 
nvmf/common.sh@119 -- # set +e 00:12:59.465 18:11:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.465 18:11:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.465 rmmod nvme_tcp 00:12:59.465 rmmod nvme_fabrics 00:12:59.465 rmmod nvme_keyring 00:12:59.465 18:11:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:59.465 18:11:17 -- nvmf/common.sh@123 -- # set -e 00:12:59.465 18:11:17 -- nvmf/common.sh@124 -- # return 0 00:12:59.465 18:11:17 -- nvmf/common.sh@477 -- # '[' -n 68440 ']' 00:12:59.465 18:11:17 -- nvmf/common.sh@478 -- # killprocess 68440 00:12:59.465 18:11:17 -- common/autotest_common.sh@936 -- # '[' -z 68440 ']' 00:12:59.465 18:11:17 -- common/autotest_common.sh@940 -- # kill -0 68440 00:12:59.465 18:11:17 -- common/autotest_common.sh@941 -- # uname 00:12:59.465 18:11:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.465 18:11:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68440 00:12:59.465 18:11:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:59.465 18:11:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:59.465 killing process with pid 68440 00:12:59.465 18:11:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68440' 00:12:59.465 18:11:17 -- common/autotest_common.sh@955 -- # kill 68440 00:12:59.465 [2024-11-18 18:11:17.929339] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:59.465 18:11:17 -- common/autotest_common.sh@960 -- # wait 68440 00:12:59.724 18:11:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.724 18:11:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.724 18:11:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.724 18:11:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.724 18:11:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.724 18:11:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.724 18:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.724 18:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.724 18:11:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:59.724 00:12:59.724 real 0m2.468s 00:12:59.724 user 0m6.853s 00:12:59.724 sys 0m0.577s 00:12:59.724 18:11:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.724 18:11:18 -- common/autotest_common.sh@10 -- # set +x 00:12:59.724 ************************************ 00:12:59.724 END TEST nvmf_identify 00:12:59.724 ************************************ 00:12:59.724 18:11:18 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:59.724 18:11:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:59.724 18:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.724 18:11:18 -- common/autotest_common.sh@10 -- # set +x 00:12:59.724 ************************************ 00:12:59.724 START TEST nvmf_perf 00:12:59.724 ************************************ 00:12:59.724 18:11:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:12:59.724 * Looking for test storage... 
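Note: the nvmf_identify pass that just finished above can be reproduced by hand against the same target (10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) with stock nvme-cli instead of the SPDK identify tool. This is only an illustrative sketch, not part of the test run; it assumes nvme-cli is installed on the initiator and that the new controller shows up as /dev/nvme0, which depends on what is already attached to the host.

    # discover the SPDK NVMe-oF TCP subsystem exposed by the target
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # attach it (same NQN the test used)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # dump roughly the same controller data the test prints (vendor, MDTS, supported log pages, ...)
    nvme id-ctrl /dev/nvme0
    # detach when done, mirroring the nvmf_delete_subsystem / nvmftestfini teardown above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1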
00:12:59.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:12:59.724 18:11:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:59.724 18:11:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:59.724 18:11:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:59.984 18:11:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:59.984 18:11:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:59.984 18:11:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:59.984 18:11:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:59.984 18:11:18 -- scripts/common.sh@335 -- # IFS=.-: 00:12:59.984 18:11:18 -- scripts/common.sh@335 -- # read -ra ver1 00:12:59.984 18:11:18 -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.984 18:11:18 -- scripts/common.sh@336 -- # read -ra ver2 00:12:59.984 18:11:18 -- scripts/common.sh@337 -- # local 'op=<' 00:12:59.984 18:11:18 -- scripts/common.sh@339 -- # ver1_l=2 00:12:59.984 18:11:18 -- scripts/common.sh@340 -- # ver2_l=1 00:12:59.984 18:11:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:59.984 18:11:18 -- scripts/common.sh@343 -- # case "$op" in 00:12:59.984 18:11:18 -- scripts/common.sh@344 -- # : 1 00:12:59.984 18:11:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:59.984 18:11:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.984 18:11:18 -- scripts/common.sh@364 -- # decimal 1 00:12:59.984 18:11:18 -- scripts/common.sh@352 -- # local d=1 00:12:59.984 18:11:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.984 18:11:18 -- scripts/common.sh@354 -- # echo 1 00:12:59.984 18:11:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:59.984 18:11:18 -- scripts/common.sh@365 -- # decimal 2 00:12:59.984 18:11:18 -- scripts/common.sh@352 -- # local d=2 00:12:59.984 18:11:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.984 18:11:18 -- scripts/common.sh@354 -- # echo 2 00:12:59.984 18:11:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:59.984 18:11:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:59.984 18:11:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:59.984 18:11:18 -- scripts/common.sh@367 -- # return 0 00:12:59.984 18:11:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.984 18:11:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.984 --rc genhtml_branch_coverage=1 00:12:59.984 --rc genhtml_function_coverage=1 00:12:59.984 --rc genhtml_legend=1 00:12:59.984 --rc geninfo_all_blocks=1 00:12:59.984 --rc geninfo_unexecuted_blocks=1 00:12:59.984 00:12:59.984 ' 00:12:59.984 18:11:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.984 --rc genhtml_branch_coverage=1 00:12:59.984 --rc genhtml_function_coverage=1 00:12:59.984 --rc genhtml_legend=1 00:12:59.984 --rc geninfo_all_blocks=1 00:12:59.984 --rc geninfo_unexecuted_blocks=1 00:12:59.984 00:12:59.984 ' 00:12:59.984 18:11:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.984 --rc genhtml_branch_coverage=1 00:12:59.984 --rc genhtml_function_coverage=1 00:12:59.984 --rc genhtml_legend=1 00:12:59.984 --rc geninfo_all_blocks=1 00:12:59.984 --rc geninfo_unexecuted_blocks=1 00:12:59.984 00:12:59.984 ' 00:12:59.984 
18:11:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:59.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.984 --rc genhtml_branch_coverage=1 00:12:59.984 --rc genhtml_function_coverage=1 00:12:59.984 --rc genhtml_legend=1 00:12:59.984 --rc geninfo_all_blocks=1 00:12:59.984 --rc geninfo_unexecuted_blocks=1 00:12:59.984 00:12:59.984 ' 00:12:59.984 18:11:18 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.984 18:11:18 -- nvmf/common.sh@7 -- # uname -s 00:12:59.984 18:11:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.984 18:11:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.984 18:11:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.984 18:11:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.984 18:11:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.984 18:11:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.984 18:11:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.984 18:11:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.984 18:11:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.984 18:11:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:12:59.984 18:11:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:12:59.984 18:11:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.984 18:11:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.984 18:11:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.984 18:11:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.984 18:11:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.984 18:11:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.984 18:11:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.984 18:11:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.984 18:11:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.984 18:11:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.984 18:11:18 -- paths/export.sh@5 -- # export PATH 00:12:59.984 18:11:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.984 18:11:18 -- nvmf/common.sh@46 -- # : 0 00:12:59.984 18:11:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.984 18:11:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.984 18:11:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.984 18:11:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.984 18:11:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.984 18:11:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.984 18:11:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.984 18:11:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.984 18:11:18 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:59.984 18:11:18 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:59.984 18:11:18 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.984 18:11:18 -- host/perf.sh@17 -- # nvmftestinit 00:12:59.984 18:11:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.984 18:11:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.984 18:11:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.984 18:11:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.984 18:11:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.984 18:11:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.984 18:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.984 18:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.984 18:11:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:59.984 18:11:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:59.984 18:11:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.984 18:11:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.984 18:11:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.984 18:11:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:59.984 18:11:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.984 18:11:18 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.984 18:11:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.984 18:11:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.984 18:11:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.984 18:11:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.984 18:11:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.984 18:11:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.984 18:11:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:59.984 18:11:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.984 Cannot find device "nvmf_tgt_br" 00:12:59.984 18:11:18 -- nvmf/common.sh@154 -- # true 00:12:59.984 18:11:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.984 Cannot find device "nvmf_tgt_br2" 00:12:59.984 18:11:18 -- nvmf/common.sh@155 -- # true 00:12:59.984 18:11:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.984 18:11:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.985 Cannot find device "nvmf_tgt_br" 00:12:59.985 18:11:18 -- nvmf/common.sh@157 -- # true 00:12:59.985 18:11:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.985 Cannot find device "nvmf_tgt_br2" 00:12:59.985 18:11:18 -- nvmf/common.sh@158 -- # true 00:12:59.985 18:11:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.985 18:11:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.985 18:11:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.985 18:11:18 -- nvmf/common.sh@161 -- # true 00:12:59.985 18:11:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.985 18:11:18 -- nvmf/common.sh@162 -- # true 00:12:59.985 18:11:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.985 18:11:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.985 18:11:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.985 18:11:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.244 18:11:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.244 18:11:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.244 18:11:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.244 18:11:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:00.245 18:11:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:00.245 18:11:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:00.245 18:11:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:00.245 18:11:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:00.245 18:11:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:00.245 18:11:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.245 18:11:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:13:00.245 18:11:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.245 18:11:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:00.245 18:11:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:00.245 18:11:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.245 18:11:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.245 18:11:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.245 18:11:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.245 18:11:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.245 18:11:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:00.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:00.245 00:13:00.245 --- 10.0.0.2 ping statistics --- 00:13:00.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.245 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:00.245 18:11:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:00.245 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.245 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:13:00.245 00:13:00.245 --- 10.0.0.3 ping statistics --- 00:13:00.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.245 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:00.245 18:11:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:00.245 00:13:00.245 --- 10.0.0.1 ping statistics --- 00:13:00.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.245 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:00.245 18:11:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.245 18:11:18 -- nvmf/common.sh@421 -- # return 0 00:13:00.245 18:11:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.245 18:11:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.245 18:11:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.245 18:11:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.245 18:11:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.245 18:11:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.245 18:11:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.245 18:11:18 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:00.245 18:11:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.245 18:11:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.245 18:11:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
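Note: the nvmf_veth_init block above builds the virtual network the NVMe/TCP host tests run on. The initiator stays in the root namespace on nvmf_init_if (10.0.0.1), the target interfaces live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and a bridge plus an iptables ACCEPT rule on port 4420 tie the two sides together. The following is a condensed sketch of that setup, an approximation rather than the script itself, with names and addresses taken from the log; the second target interface (10.0.0.3) is left out for brevity.

  # Sketch of the topology nvmf_veth_init builds (approximate; names and addresses from the log above)
  ip netns add nvmf_tgt_ns_spdk                                            # target-side network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br                # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                  # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                                          # bridge joins the root-namespace ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                       # same reachability check as in the log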
00:13:00.245 18:11:18 -- nvmf/common.sh@469 -- # nvmfpid=68650 00:13:00.245 18:11:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.245 18:11:18 -- nvmf/common.sh@470 -- # waitforlisten 68650 00:13:00.245 18:11:18 -- common/autotest_common.sh@829 -- # '[' -z 68650 ']' 00:13:00.245 18:11:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.245 18:11:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.245 18:11:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.245 18:11:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.245 18:11:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.245 [2024-11-18 18:11:18.823474] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.245 [2024-11-18 18:11:18.823901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.503 [2024-11-18 18:11:18.967508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.503 [2024-11-18 18:11:19.022264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.503 [2024-11-18 18:11:19.022747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.503 [2024-11-18 18:11:19.022868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.503 [2024-11-18 18:11:19.023035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:00.503 [2024-11-18 18:11:19.023237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.503 [2024-11-18 18:11:19.023303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.503 [2024-11-18 18:11:19.023837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.503 [2024-11-18 18:11:19.023849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.439 18:11:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.439 18:11:19 -- common/autotest_common.sh@862 -- # return 0 00:13:01.439 18:11:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:01.439 18:11:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.439 18:11:19 -- common/autotest_common.sh@10 -- # set +x 00:13:01.439 18:11:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.439 18:11:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:01.439 18:11:19 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:01.698 18:11:20 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:01.698 18:11:20 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:01.957 18:11:20 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:01.957 18:11:20 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:02.217 18:11:20 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:02.217 18:11:20 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:02.217 18:11:20 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:02.217 18:11:20 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:02.217 18:11:20 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:02.477 [2024-11-18 18:11:20.941935] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.477 18:11:20 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:02.737 18:11:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:02.737 18:11:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:02.997 18:11:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:02.997 18:11:21 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:03.256 18:11:21 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.516 [2024-11-18 18:11:21.943354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.516 18:11:21 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.775 18:11:22 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:03.775 18:11:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:03.775 18:11:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:03.775 18:11:22 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:04.713 Initializing NVMe 
Controllers 00:13:04.713 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:04.713 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:04.713 Initialization complete. Launching workers. 00:13:04.713 ======================================================== 00:13:04.713 Latency(us) 00:13:04.713 Device Information : IOPS MiB/s Average min max 00:13:04.713 PCIE (0000:00:06.0) NSID 1 from core 0: 22781.44 88.99 1404.84 357.30 7862.34 00:13:04.713 ======================================================== 00:13:04.713 Total : 22781.44 88.99 1404.84 357.30 7862.34 00:13:04.713 00:13:04.713 18:11:23 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:06.093 Initializing NVMe Controllers 00:13:06.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:06.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:06.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:06.093 Initialization complete. Launching workers. 00:13:06.093 ======================================================== 00:13:06.093 Latency(us) 00:13:06.093 Device Information : IOPS MiB/s Average min max 00:13:06.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3658.97 14.29 272.99 99.48 6283.31 00:13:06.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7976.16 5967.03 12044.07 00:13:06.093 ======================================================== 00:13:06.093 Total : 3784.97 14.79 529.43 99.48 12044.07 00:13:06.093 00:13:06.093 18:11:24 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:07.473 Initializing NVMe Controllers 00:13:07.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:07.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:07.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:07.474 Initialization complete. Launching workers. 00:13:07.474 ======================================================== 00:13:07.474 Latency(us) 00:13:07.474 Device Information : IOPS MiB/s Average min max 00:13:07.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8978.99 35.07 3564.45 392.84 7527.81 00:13:07.474 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4020.76 15.71 8000.41 6083.19 11942.21 00:13:07.474 ======================================================== 00:13:07.474 Total : 12999.75 50.78 4936.47 392.84 11942.21 00:13:07.474 00:13:07.474 18:11:26 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:07.474 18:11:26 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:10.010 Initializing NVMe Controllers 00:13:10.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.010 Controller IO queue size 128, less than required. 00:13:10.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.010 Controller IO queue size 128, less than required. 
00:13:10.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:10.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:10.010 Initialization complete. Launching workers. 00:13:10.010 ======================================================== 00:13:10.010 Latency(us) 00:13:10.010 Device Information : IOPS MiB/s Average min max 00:13:10.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1969.66 492.41 66086.50 33581.53 155613.63 00:13:10.010 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 637.23 159.31 206125.16 99563.63 328735.37 00:13:10.010 ======================================================== 00:13:10.010 Total : 2606.88 651.72 100317.58 33581.53 328735.37 00:13:10.010 00:13:10.010 18:11:28 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:10.270 No valid NVMe controllers or AIO or URING devices found 00:13:10.270 Initializing NVMe Controllers 00:13:10.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.270 Controller IO queue size 128, less than required. 00:13:10.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.270 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:10.270 Controller IO queue size 128, less than required. 00:13:10.270 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:10.270 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:10.270 WARNING: Some requested NVMe devices were skipped 00:13:10.270 18:11:28 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:12.805 Initializing NVMe Controllers 00:13:12.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.805 Controller IO queue size 128, less than required. 00:13:12.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:12.805 Controller IO queue size 128, less than required. 00:13:12.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:12.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:12.805 Initialization complete. Launching workers. 
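Note: earlier in this run (host/perf.sh@42 through @49 above) the target was populated over JSON-RPC before any of these perf passes: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is added, the Malloc0 and Nvme0n1 bdevs become its namespaces, and a listener is opened on 10.0.0.2:4420. A minimal sketch of that sequence with the same rpc.py calls seen in the log, assuming the target from above is already running; the discovery listener and error handling are omitted.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                                      # transport options as used by the test
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # namespace 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1             # namespace 2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # spdk_nvme_perf then reaches the subsystem with:
  #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'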
00:13:12.805 00:13:12.805 ==================== 00:13:12.805 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:12.805 TCP transport: 00:13:12.805 polls: 7362 00:13:12.805 idle_polls: 0 00:13:12.805 sock_completions: 7362 00:13:12.805 nvme_completions: 6835 00:13:12.805 submitted_requests: 10317 00:13:12.805 queued_requests: 1 00:13:12.805 00:13:12.805 ==================== 00:13:12.805 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:12.805 TCP transport: 00:13:12.805 polls: 7579 00:13:12.805 idle_polls: 0 00:13:12.805 sock_completions: 7579 00:13:12.805 nvme_completions: 6778 00:13:12.805 submitted_requests: 10399 00:13:12.805 queued_requests: 1 00:13:12.805 ======================================================== 00:13:12.805 Latency(us) 00:13:12.805 Device Information : IOPS MiB/s Average min max 00:13:12.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1769.59 442.40 74554.88 36564.87 128931.63 00:13:12.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1755.11 438.78 74012.35 36545.88 134289.05 00:13:12.805 ======================================================== 00:13:12.805 Total : 3524.70 881.18 74284.73 36545.88 134289.05 00:13:12.805 00:13:12.805 18:11:31 -- host/perf.sh@66 -- # sync 00:13:12.805 18:11:31 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.065 18:11:31 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:13:13.065 18:11:31 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:13:13.065 18:11:31 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:13:13.324 18:11:31 -- host/perf.sh@72 -- # ls_guid=dac2fb99-4433-4140-bfa5-dc195c21b555 00:13:13.324 18:11:31 -- host/perf.sh@73 -- # get_lvs_free_mb dac2fb99-4433-4140-bfa5-dc195c21b555 00:13:13.324 18:11:31 -- common/autotest_common.sh@1353 -- # local lvs_uuid=dac2fb99-4433-4140-bfa5-dc195c21b555 00:13:13.324 18:11:31 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:13.324 18:11:31 -- common/autotest_common.sh@1355 -- # local fc 00:13:13.324 18:11:31 -- common/autotest_common.sh@1356 -- # local cs 00:13:13.324 18:11:31 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:13.583 18:11:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:13.583 { 00:13:13.583 "uuid": "dac2fb99-4433-4140-bfa5-dc195c21b555", 00:13:13.583 "name": "lvs_0", 00:13:13.583 "base_bdev": "Nvme0n1", 00:13:13.583 "total_data_clusters": 1278, 00:13:13.583 "free_clusters": 1278, 00:13:13.583 "block_size": 4096, 00:13:13.583 "cluster_size": 4194304 00:13:13.583 } 00:13:13.583 ]' 00:13:13.583 18:11:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="dac2fb99-4433-4140-bfa5-dc195c21b555") .free_clusters' 00:13:13.842 18:11:32 -- common/autotest_common.sh@1358 -- # fc=1278 00:13:13.842 18:11:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="dac2fb99-4433-4140-bfa5-dc195c21b555") .cluster_size' 00:13:13.842 5112 00:13:13.842 18:11:32 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:13.842 18:11:32 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:13:13.842 18:11:32 -- common/autotest_common.sh@1363 -- # echo 5112 00:13:13.842 18:11:32 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:13:13.842 18:11:32 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u dac2fb99-4433-4140-bfa5-dc195c21b555 lbd_0 5112 00:13:14.102 18:11:32 -- host/perf.sh@80 -- # lb_guid=279bc932-5c1f-4d56-a64d-e7069f2d035a 00:13:14.102 18:11:32 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 279bc932-5c1f-4d56-a64d-e7069f2d035a lvs_n_0 00:13:14.361 18:11:32 -- host/perf.sh@83 -- # ls_nested_guid=d41d3e58-02e8-4a30-ae49-4b86fdfc4b49 00:13:14.361 18:11:32 -- host/perf.sh@84 -- # get_lvs_free_mb d41d3e58-02e8-4a30-ae49-4b86fdfc4b49 00:13:14.361 18:11:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=d41d3e58-02e8-4a30-ae49-4b86fdfc4b49 00:13:14.361 18:11:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:14.361 18:11:32 -- common/autotest_common.sh@1355 -- # local fc 00:13:14.361 18:11:32 -- common/autotest_common.sh@1356 -- # local cs 00:13:14.361 18:11:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:14.621 18:11:33 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:14.621 { 00:13:14.621 "uuid": "dac2fb99-4433-4140-bfa5-dc195c21b555", 00:13:14.621 "name": "lvs_0", 00:13:14.621 "base_bdev": "Nvme0n1", 00:13:14.621 "total_data_clusters": 1278, 00:13:14.621 "free_clusters": 0, 00:13:14.621 "block_size": 4096, 00:13:14.621 "cluster_size": 4194304 00:13:14.621 }, 00:13:14.621 { 00:13:14.621 "uuid": "d41d3e58-02e8-4a30-ae49-4b86fdfc4b49", 00:13:14.621 "name": "lvs_n_0", 00:13:14.621 "base_bdev": "279bc932-5c1f-4d56-a64d-e7069f2d035a", 00:13:14.621 "total_data_clusters": 1276, 00:13:14.621 "free_clusters": 1276, 00:13:14.621 "block_size": 4096, 00:13:14.621 "cluster_size": 4194304 00:13:14.621 } 00:13:14.621 ]' 00:13:14.621 18:11:33 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="d41d3e58-02e8-4a30-ae49-4b86fdfc4b49") .free_clusters' 00:13:14.921 18:11:33 -- common/autotest_common.sh@1358 -- # fc=1276 00:13:14.921 18:11:33 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="d41d3e58-02e8-4a30-ae49-4b86fdfc4b49") .cluster_size' 00:13:14.921 5104 00:13:14.921 18:11:33 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:14.921 18:11:33 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:13:14.921 18:11:33 -- common/autotest_common.sh@1363 -- # echo 5104 00:13:14.921 18:11:33 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:13:14.921 18:11:33 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d41d3e58-02e8-4a30-ae49-4b86fdfc4b49 lbd_nest_0 5104 00:13:15.208 18:11:33 -- host/perf.sh@88 -- # lb_nested_guid=051a55c8-83a0-4a7a-a52e-4d133e77a5b9 00:13:15.208 18:11:33 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:15.208 18:11:33 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:13:15.208 18:11:33 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 051a55c8-83a0-4a7a-a52e-4d133e77a5b9 00:13:15.468 18:11:34 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.727 18:11:34 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:13:15.727 18:11:34 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:13:15.727 18:11:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:15.727 18:11:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:15.727 18:11:34 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:15.986 No valid NVMe controllers or AIO or URING devices found 00:13:16.246 Initializing NVMe Controllers 00:13:16.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.246 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:16.246 WARNING: Some requested NVMe devices were skipped 00:13:16.246 18:11:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:16.246 18:11:34 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:28.455 Initializing NVMe Controllers 00:13:28.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:28.455 Initialization complete. Launching workers. 00:13:28.455 ======================================================== 00:13:28.455 Latency(us) 00:13:28.455 Device Information : IOPS MiB/s Average min max 00:13:28.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 976.21 122.03 1023.17 324.97 8543.31 00:13:28.455 ======================================================== 00:13:28.455 Total : 976.21 122.03 1023.17 324.97 8543.31 00:13:28.455 00:13:28.455 18:11:44 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:28.455 18:11:44 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:28.455 18:11:44 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:28.455 No valid NVMe controllers or AIO or URING devices found 00:13:28.455 Initializing NVMe Controllers 00:13:28.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.455 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:28.455 WARNING: Some requested NVMe devices were skipped 00:13:28.455 18:11:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:28.455 18:11:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:38.436 Initializing NVMe Controllers 00:13:38.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:38.436 Initialization complete. Launching workers. 
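Note: the sizing checks above (get_lvs_free_mb) turn lvstore cluster counts into megabytes before the test volumes are carved out. lvs_0 reports 1278 free 4 MiB clusters, so lbd_0 is created at 5112 MiB, and the nested lvs_n_0 on top of it shows 1276 free clusters (5104 MiB), the two-cluster difference presumably going to the nested store's own metadata. The arithmetic, written out with the values from the log:

  # get_lvs_free_mb, in effect: free clusters * cluster size, expressed in MiB
  cluster_size=4194304                                      # bytes, from bdev_lvol_get_lvstores
  free_clusters=1278                                        # lvs_0 in the log above
  echo $(( free_clusters * cluster_size / 1024 / 1024 ))    # 5112 -> bdev_lvol_create ... lbd_0 5112
  free_clusters=1276                                        # nested lvs_n_0
  echo $(( free_clusters * cluster_size / 1024 / 1024 ))    # 5104 -> bdev_lvol_create ... lbd_nest_0 5104

In the qd_depth x io_size sweep that starts here, only the 131072-byte passes produce results: the lone namespace is the 4096-byte-block lvol, so each 512-byte run drops it with the "invalid ns size ... for I/O size 512" warning and perf reports no valid controllers.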
00:13:38.436 ======================================================== 00:13:38.436 Latency(us) 00:13:38.436 Device Information : IOPS MiB/s Average min max 00:13:38.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1359.30 169.91 23568.61 6290.84 58651.25 00:13:38.436 ======================================================== 00:13:38.436 Total : 1359.30 169.91 23568.61 6290.84 58651.25 00:13:38.436 00:13:38.436 18:11:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:38.436 18:11:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:38.436 18:11:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:38.436 No valid NVMe controllers or AIO or URING devices found 00:13:38.436 Initializing NVMe Controllers 00:13:38.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.436 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:38.436 WARNING: Some requested NVMe devices were skipped 00:13:38.436 18:11:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:38.436 18:11:55 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:48.461 Initializing NVMe Controllers 00:13:48.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.461 Controller IO queue size 128, less than required. 00:13:48.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:48.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:48.461 Initialization complete. Launching workers. 
00:13:48.461 ======================================================== 00:13:48.461 Latency(us) 00:13:48.461 Device Information : IOPS MiB/s Average min max 00:13:48.461 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4052.00 506.50 31649.96 7945.63 68424.06 00:13:48.461 ======================================================== 00:13:48.461 Total : 4052.00 506.50 31649.96 7945.63 68424.06 00:13:48.461 00:13:48.461 18:12:06 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.461 18:12:06 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 051a55c8-83a0-4a7a-a52e-4d133e77a5b9 00:13:48.461 18:12:06 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:13:48.720 18:12:07 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 279bc932-5c1f-4d56-a64d-e7069f2d035a 00:13:48.979 18:12:07 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:13:49.238 18:12:07 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:13:49.238 18:12:07 -- host/perf.sh@114 -- # nvmftestfini 00:13:49.238 18:12:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:49.238 18:12:07 -- nvmf/common.sh@116 -- # sync 00:13:49.238 18:12:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:49.238 18:12:07 -- nvmf/common.sh@119 -- # set +e 00:13:49.238 18:12:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:49.238 18:12:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:49.238 rmmod nvme_tcp 00:13:49.238 rmmod nvme_fabrics 00:13:49.238 rmmod nvme_keyring 00:13:49.238 18:12:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:49.238 18:12:07 -- nvmf/common.sh@123 -- # set -e 00:13:49.238 18:12:07 -- nvmf/common.sh@124 -- # return 0 00:13:49.238 18:12:07 -- nvmf/common.sh@477 -- # '[' -n 68650 ']' 00:13:49.238 18:12:07 -- nvmf/common.sh@478 -- # killprocess 68650 00:13:49.238 18:12:07 -- common/autotest_common.sh@936 -- # '[' -z 68650 ']' 00:13:49.238 18:12:07 -- common/autotest_common.sh@940 -- # kill -0 68650 00:13:49.238 18:12:07 -- common/autotest_common.sh@941 -- # uname 00:13:49.238 18:12:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.238 18:12:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68650 00:13:49.238 killing process with pid 68650 00:13:49.238 18:12:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:49.238 18:12:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:49.238 18:12:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68650' 00:13:49.238 18:12:07 -- common/autotest_common.sh@955 -- # kill 68650 00:13:49.238 18:12:07 -- common/autotest_common.sh@960 -- # wait 68650 00:13:50.617 18:12:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.617 18:12:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.617 18:12:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.617 18:12:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.617 18:12:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.617 18:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.617 18:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.617 18:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.617 18:12:09 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:13:50.617 ************************************ 00:13:50.617 END TEST nvmf_perf 00:13:50.617 ************************************ 00:13:50.617 00:13:50.617 real 0m51.012s 00:13:50.617 user 3m12.245s 00:13:50.617 sys 0m12.158s 00:13:50.617 18:12:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.617 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:50.877 18:12:09 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:50.877 18:12:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:50.877 18:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.877 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:50.877 ************************************ 00:13:50.877 START TEST nvmf_fio_host 00:13:50.877 ************************************ 00:13:50.877 18:12:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:13:50.877 * Looking for test storage... 00:13:50.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:50.877 18:12:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:50.877 18:12:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:50.877 18:12:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:50.877 18:12:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:50.877 18:12:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:50.877 18:12:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:50.877 18:12:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:50.877 18:12:09 -- scripts/common.sh@335 -- # IFS=.-: 00:13:50.877 18:12:09 -- scripts/common.sh@335 -- # read -ra ver1 00:13:50.877 18:12:09 -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.877 18:12:09 -- scripts/common.sh@336 -- # read -ra ver2 00:13:50.877 18:12:09 -- scripts/common.sh@337 -- # local 'op=<' 00:13:50.877 18:12:09 -- scripts/common.sh@339 -- # ver1_l=2 00:13:50.877 18:12:09 -- scripts/common.sh@340 -- # ver2_l=1 00:13:50.877 18:12:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:50.877 18:12:09 -- scripts/common.sh@343 -- # case "$op" in 00:13:50.877 18:12:09 -- scripts/common.sh@344 -- # : 1 00:13:50.877 18:12:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:50.877 18:12:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.877 18:12:09 -- scripts/common.sh@364 -- # decimal 1 00:13:50.877 18:12:09 -- scripts/common.sh@352 -- # local d=1 00:13:50.877 18:12:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.877 18:12:09 -- scripts/common.sh@354 -- # echo 1 00:13:50.877 18:12:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:50.877 18:12:09 -- scripts/common.sh@365 -- # decimal 2 00:13:50.877 18:12:09 -- scripts/common.sh@352 -- # local d=2 00:13:50.877 18:12:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.877 18:12:09 -- scripts/common.sh@354 -- # echo 2 00:13:50.877 18:12:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:50.877 18:12:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:50.877 18:12:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:50.877 18:12:09 -- scripts/common.sh@367 -- # return 0 00:13:50.877 18:12:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.877 18:12:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:50.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.877 --rc genhtml_branch_coverage=1 00:13:50.877 --rc genhtml_function_coverage=1 00:13:50.877 --rc genhtml_legend=1 00:13:50.877 --rc geninfo_all_blocks=1 00:13:50.877 --rc geninfo_unexecuted_blocks=1 00:13:50.877 00:13:50.877 ' 00:13:50.877 18:12:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:50.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.877 --rc genhtml_branch_coverage=1 00:13:50.877 --rc genhtml_function_coverage=1 00:13:50.877 --rc genhtml_legend=1 00:13:50.877 --rc geninfo_all_blocks=1 00:13:50.877 --rc geninfo_unexecuted_blocks=1 00:13:50.877 00:13:50.877 ' 00:13:50.877 18:12:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.878 --rc genhtml_branch_coverage=1 00:13:50.878 --rc genhtml_function_coverage=1 00:13:50.878 --rc genhtml_legend=1 00:13:50.878 --rc geninfo_all_blocks=1 00:13:50.878 --rc geninfo_unexecuted_blocks=1 00:13:50.878 00:13:50.878 ' 00:13:50.878 18:12:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.878 --rc genhtml_branch_coverage=1 00:13:50.878 --rc genhtml_function_coverage=1 00:13:50.878 --rc genhtml_legend=1 00:13:50.878 --rc geninfo_all_blocks=1 00:13:50.878 --rc geninfo_unexecuted_blocks=1 00:13:50.878 00:13:50.878 ' 00:13:50.878 18:12:09 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.878 18:12:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.878 18:12:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.878 18:12:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.878 18:12:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@5 -- # export PATH 00:13:50.878 18:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.878 18:12:09 -- nvmf/common.sh@7 -- # uname -s 00:13:50.878 18:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.878 18:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.878 18:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.878 18:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.878 18:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.878 18:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.878 18:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.878 18:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.878 18:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.878 18:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:13:50.878 18:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:13:50.878 18:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.878 18:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.878 18:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.878 18:12:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.878 18:12:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.878 18:12:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.878 18:12:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.878 18:12:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- paths/export.sh@5 -- # export PATH 00:13:50.878 18:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.878 18:12:09 -- nvmf/common.sh@46 -- # : 0 00:13:50.878 18:12:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.878 18:12:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.878 18:12:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.878 18:12:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.878 18:12:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.878 18:12:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.878 18:12:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.878 18:12:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.878 18:12:09 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.878 18:12:09 -- host/fio.sh@14 -- # nvmftestinit 00:13:50.878 18:12:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.878 18:12:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.878 18:12:09 -- nvmf/common.sh@436 -- # prepare_net_devs 
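Note: as in the perf test above, sourcing nvmf/common.sh generates the host identity that kernel-initiator tests later present: nvme gen-hostnqn produces a UUID-based NQN, the UUID portion doubles as the host ID, and both are packed into the NVME_HOST argument array. A sketch of that step; the exact way common.sh extracts the UUID is an assumption here, only the resulting values and the NVME_HOST definition appear in the log.

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:9f9bd036-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed derivation: keep only the UUID suffix
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # tests that use the kernel initiator pass "${NVME_HOST[@]}" to $NVME_CONNECT ('nvme connect')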
00:13:50.878 18:12:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.878 18:12:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.878 18:12:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.878 18:12:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.878 18:12:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.878 18:12:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:50.878 18:12:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.878 18:12:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.878 18:12:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.878 18:12:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.878 18:12:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.878 18:12:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.878 18:12:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.878 18:12:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.878 18:12:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.878 18:12:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.878 18:12:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.878 18:12:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.878 18:12:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.878 18:12:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:51.137 18:12:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:51.137 Cannot find device "nvmf_tgt_br" 00:13:51.137 18:12:09 -- nvmf/common.sh@154 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.137 Cannot find device "nvmf_tgt_br2" 00:13:51.137 18:12:09 -- nvmf/common.sh@155 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:51.137 18:12:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:51.137 Cannot find device "nvmf_tgt_br" 00:13:51.137 18:12:09 -- nvmf/common.sh@157 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:51.137 Cannot find device "nvmf_tgt_br2" 00:13:51.137 18:12:09 -- nvmf/common.sh@158 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:51.137 18:12:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:51.137 18:12:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.137 18:12:09 -- nvmf/common.sh@161 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.137 18:12:09 -- nvmf/common.sh@162 -- # true 00:13:51.137 18:12:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.137 18:12:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.138 18:12:09 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.138 18:12:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.138 18:12:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.138 18:12:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.138 18:12:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.138 18:12:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.138 18:12:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.138 18:12:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:51.138 18:12:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:51.138 18:12:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:51.138 18:12:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:51.138 18:12:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.138 18:12:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.138 18:12:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.138 18:12:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:51.138 18:12:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:51.138 18:12:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.138 18:12:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.138 18:12:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.397 18:12:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.397 18:12:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.397 18:12:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:51.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:51.397 00:13:51.397 --- 10.0.0.2 ping statistics --- 00:13:51.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.397 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:51.397 18:12:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:51.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:51.397 00:13:51.397 --- 10.0.0.3 ping statistics --- 00:13:51.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.397 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:51.397 18:12:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:51.397 00:13:51.397 --- 10.0.0.1 ping statistics --- 00:13:51.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.397 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:51.397 18:12:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.397 18:12:09 -- nvmf/common.sh@421 -- # return 0 00:13:51.397 18:12:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.397 18:12:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.397 18:12:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.397 18:12:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.397 18:12:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.397 18:12:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.397 18:12:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.397 18:12:09 -- host/fio.sh@16 -- # [[ y != y ]] 00:13:51.397 18:12:09 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:13:51.397 18:12:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.397 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:51.397 18:12:09 -- host/fio.sh@24 -- # nvmfpid=69485 00:13:51.397 18:12:09 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.397 18:12:09 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.397 18:12:09 -- host/fio.sh@28 -- # waitforlisten 69485 00:13:51.397 18:12:09 -- common/autotest_common.sh@829 -- # '[' -z 69485 ']' 00:13:51.397 18:12:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.397 18:12:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.397 18:12:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.397 18:12:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.397 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:51.397 [2024-11-18 18:12:09.853583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:51.397 [2024-11-18 18:12:09.854470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.656 [2024-11-18 18:12:10.000670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.656 [2024-11-18 18:12:10.072093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.656 [2024-11-18 18:12:10.072485] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.656 [2024-11-18 18:12:10.072658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.656 [2024-11-18 18:12:10.072888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
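The nvmf_veth_init sequence traced above builds the test network that the rest of this run depends on: the initiator stays in the root network namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with a second interface on 10.0.0.3), and both sides are joined through the nvmf_br bridge, with TCP port 4420 opened for NVMe/TCP. A minimal sketch of that topology, condensed from the commands in the trace (the second target interface and the teardown of any previous run are left out):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per side; the *_br ends are enslaved to a common bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, matching the ping check in the trace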
00:13:51.656 [2024-11-18 18:12:10.073123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.656 [2024-11-18 18:12:10.073269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.656 [2024-11-18 18:12:10.073864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.656 [2024-11-18 18:12:10.073901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.589 18:12:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.589 18:12:10 -- common/autotest_common.sh@862 -- # return 0 00:13:52.589 18:12:10 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.589 [2024-11-18 18:12:11.103909] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.589 18:12:11 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:13:52.589 18:12:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.589 18:12:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.589 18:12:11 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:53.154 Malloc1 00:13:53.154 18:12:11 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.154 18:12:11 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:53.412 18:12:11 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.669 [2024-11-18 18:12:12.173641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.669 18:12:12 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:53.927 18:12:12 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:53.927 18:12:12 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:53.928 18:12:12 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:53.928 18:12:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:53.928 18:12:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.928 18:12:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:53.928 18:12:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.928 18:12:12 -- common/autotest_common.sh@1330 -- # shift 00:13:53.928 18:12:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:53.928 18:12:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:53.928 18:12:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:53.928 18:12:12 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:53.928 18:12:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:53.928 18:12:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:53.928 18:12:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:53.928 18:12:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:13:54.185 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:54.185 fio-3.35 00:13:54.185 Starting 1 thread 00:13:56.726 00:13:56.727 test: (groupid=0, jobs=1): err= 0: pid=69563: Mon Nov 18 18:12:14 2024 00:13:56.727 read: IOPS=9532, BW=37.2MiB/s (39.0MB/s)(74.7MiB/2006msec) 00:13:56.727 slat (nsec): min=1930, max=393625, avg=2721.81, stdev=3903.68 00:13:56.727 clat (usec): min=2745, max=12242, avg=6968.52, stdev=542.51 00:13:56.727 lat (usec): min=2806, max=12244, avg=6971.24, stdev=542.46 00:13:56.727 clat percentiles (usec): 00:13:56.727 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:13:56.727 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:13:56.727 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7832], 00:13:56.727 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[10290], 99.95th=[11600], 00:13:56.727 | 99.99th=[12125] 00:13:56.727 bw ( KiB/s): min=36632, max=39720, per=99.97%, avg=38120.00, stdev=1273.70, samples=4 00:13:56.727 iops : min= 9158, max= 9930, avg=9530.00, stdev=318.43, samples=4 00:13:56.727 write: IOPS=9543, BW=37.3MiB/s (39.1MB/s)(74.8MiB/2006msec); 0 zone resets 00:13:56.727 slat (usec): min=2, max=278, avg= 2.87, stdev= 2.72 00:13:56.727 clat (usec): min=2579, max=11790, avg=6396.32, stdev=504.74 00:13:56.727 lat (usec): min=2606, max=11792, avg=6399.19, stdev=504.81 00:13:56.727 clat percentiles (usec): 00:13:56.727 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:13:56.727 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:13:56.727 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7242], 00:13:56.727 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[10159], 99.95th=[10814], 00:13:56.727 | 99.99th=[11731] 00:13:56.727 bw ( KiB/s): min=37512, max=38848, per=99.96%, avg=38162.00, stdev=623.05, samples=4 00:13:56.727 iops : min= 9378, max= 9712, avg=9540.50, stdev=155.76, samples=4 00:13:56.727 lat (msec) : 4=0.07%, 10=99.81%, 20=0.12% 00:13:56.727 cpu : usr=67.03%, sys=23.49%, ctx=36, majf=0, minf=5 00:13:56.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:13:56.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:56.727 issued rwts: total=19123,19145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:56.727 00:13:56.727 Run status group 0 (all jobs): 00:13:56.727 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.7MiB (78.3MB), 
run=2006-2006msec 00:13:56.727 WRITE: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.8MiB (78.4MB), run=2006-2006msec 00:13:56.727 18:12:14 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:56.727 18:12:14 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:56.727 18:12:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:13:56.727 18:12:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:56.727 18:12:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:13:56.727 18:12:14 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:56.727 18:12:14 -- common/autotest_common.sh@1330 -- # shift 00:13:56.727 18:12:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:13:56.727 18:12:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # grep libasan 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:56.727 18:12:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:56.727 18:12:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:13:56.727 18:12:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:13:56.727 18:12:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:13:56.727 18:12:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:56.727 18:12:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:13:56.727 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:13:56.727 fio-3.35 00:13:56.727 Starting 1 thread 00:13:59.253 00:13:59.253 test: (groupid=0, jobs=1): err= 0: pid=69611: Mon Nov 18 18:12:17 2024 00:13:59.253 read: IOPS=8571, BW=134MiB/s (140MB/s)(269MiB/2009msec) 00:13:59.253 slat (usec): min=2, max=132, avg= 3.79, stdev= 2.61 00:13:59.253 clat (usec): min=2956, max=16695, avg=8190.69, stdev=2512.33 00:13:59.253 lat (usec): min=2959, max=16698, avg=8194.48, stdev=2512.45 00:13:59.253 clat percentiles (usec): 00:13:59.253 | 1.00th=[ 4047], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5866], 00:13:59.253 | 30.00th=[ 6521], 40.00th=[ 7242], 50.00th=[ 7898], 60.00th=[ 8586], 00:13:59.253 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11469], 95.00th=[13042], 00:13:59.253 | 99.00th=[15008], 99.50th=[15926], 99.90th=[16450], 99.95th=[16581], 00:13:59.253 | 99.99th=[16712] 00:13:59.253 bw ( KiB/s): min=61760, max=80064, per=51.94%, avg=71232.00, stdev=9762.05, samples=4 00:13:59.253 iops : 
min= 3860, max= 5004, avg=4452.00, stdev=610.13, samples=4 00:13:59.253 write: IOPS=5063, BW=79.1MiB/s (83.0MB/s)(145MiB/1831msec); 0 zone resets 00:13:59.253 slat (usec): min=32, max=361, avg=38.91, stdev= 8.76 00:13:59.253 clat (usec): min=2946, max=20069, avg=11626.41, stdev=2122.57 00:13:59.253 lat (usec): min=2980, max=20107, avg=11665.32, stdev=2123.23 00:13:59.253 clat percentiles (usec): 00:13:59.253 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:13:59.253 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:13:59.253 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15533], 00:13:59.253 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19268], 99.95th=[19792], 00:13:59.253 | 99.99th=[20055] 00:13:59.253 bw ( KiB/s): min=65216, max=83008, per=91.55%, avg=74176.00, stdev=9134.90, samples=4 00:13:59.253 iops : min= 4076, max= 5188, avg=4636.00, stdev=570.93, samples=4 00:13:59.253 lat (msec) : 4=0.59%, 10=57.33%, 20=42.07%, 50=0.01% 00:13:59.253 cpu : usr=82.58%, sys=12.94%, ctx=4, majf=0, minf=4 00:13:59.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:59.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.253 issued rwts: total=17220,9272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.253 00:13:59.253 Run status group 0 (all jobs): 00:13:59.253 READ: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=269MiB (282MB), run=2009-2009msec 00:13:59.253 WRITE: bw=79.1MiB/s (83.0MB/s), 79.1MiB/s-79.1MiB/s (83.0MB/s-83.0MB/s), io=145MiB (152MB), run=1831-1831msec 00:13:59.253 18:12:17 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.253 18:12:17 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:13:59.253 18:12:17 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:13:59.253 18:12:17 -- host/fio.sh@51 -- # get_nvme_bdfs 00:13:59.253 18:12:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:13:59.253 18:12:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:13:59.253 18:12:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:59.253 18:12:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:59.253 18:12:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:13:59.253 18:12:17 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:13:59.253 18:12:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:13:59.253 18:12:17 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:13:59.512 Nvme0n1 00:13:59.512 18:12:18 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:13:59.769 18:12:18 -- host/fio.sh@53 -- # ls_guid=8a7fe143-200c-41ce-b3d0-c14f786b490d 00:13:59.769 18:12:18 -- host/fio.sh@54 -- # get_lvs_free_mb 8a7fe143-200c-41ce-b3d0-c14f786b490d 00:13:59.769 18:12:18 -- common/autotest_common.sh@1353 -- # local lvs_uuid=8a7fe143-200c-41ce-b3d0-c14f786b490d 00:13:59.769 18:12:18 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:59.769 18:12:18 -- common/autotest_common.sh@1355 -- # local fc 00:13:59.769 18:12:18 -- 
common/autotest_common.sh@1356 -- # local cs 00:13:59.769 18:12:18 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:00.027 18:12:18 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:00.027 { 00:14:00.027 "uuid": "8a7fe143-200c-41ce-b3d0-c14f786b490d", 00:14:00.027 "name": "lvs_0", 00:14:00.027 "base_bdev": "Nvme0n1", 00:14:00.027 "total_data_clusters": 4, 00:14:00.027 "free_clusters": 4, 00:14:00.027 "block_size": 4096, 00:14:00.027 "cluster_size": 1073741824 00:14:00.027 } 00:14:00.027 ]' 00:14:00.027 18:12:18 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="8a7fe143-200c-41ce-b3d0-c14f786b490d") .free_clusters' 00:14:00.027 18:12:18 -- common/autotest_common.sh@1358 -- # fc=4 00:14:00.027 18:12:18 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="8a7fe143-200c-41ce-b3d0-c14f786b490d") .cluster_size' 00:14:00.027 18:12:18 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:14:00.027 18:12:18 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:14:00.027 4096 00:14:00.027 18:12:18 -- common/autotest_common.sh@1363 -- # echo 4096 00:14:00.027 18:12:18 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:00.285 6f72ddc8-a968-47bd-86f2-c725d68b5a06 00:14:00.285 18:12:18 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:00.543 18:12:19 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:00.807 18:12:19 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:01.066 18:12:19 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:01.066 18:12:19 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:01.066 18:12:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:01.066 18:12:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:01.066 18:12:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:01.066 18:12:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:01.066 18:12:19 -- common/autotest_common.sh@1330 -- # shift 00:14:01.066 18:12:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:01.066 18:12:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:01.066 18:12:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:01.066 18:12:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:01.066 18:12:19 -- 
common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:01.066 18:12:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:01.066 18:12:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:01.066 18:12:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:01.066 18:12:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:01.066 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:01.066 fio-3.35 00:14:01.066 Starting 1 thread 00:14:03.596 00:14:03.597 test: (groupid=0, jobs=1): err= 0: pid=69720: Mon Nov 18 18:12:21 2024 00:14:03.597 read: IOPS=6399, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2008msec) 00:14:03.597 slat (usec): min=2, max=314, avg= 2.88, stdev= 3.93 00:14:03.597 clat (usec): min=3055, max=18481, avg=10429.81, stdev=872.55 00:14:03.597 lat (usec): min=3065, max=18483, avg=10432.68, stdev=872.29 00:14:03.597 clat percentiles (usec): 00:14:03.597 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:14:03.597 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:14:03.597 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:14:03.597 | 99.00th=[12387], 99.50th=[12780], 99.90th=[16450], 99.95th=[17695], 00:14:03.597 | 99.99th=[18482] 00:14:03.597 bw ( KiB/s): min=24496, max=26088, per=99.88%, avg=25566.00, stdev=725.80, samples=4 00:14:03.597 iops : min= 6124, max= 6522, avg=6391.50, stdev=181.45, samples=4 00:14:03.597 write: IOPS=6400, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2008msec); 0 zone resets 00:14:03.597 slat (usec): min=2, max=260, avg= 2.98, stdev= 2.92 00:14:03.597 clat (usec): min=2469, max=16381, avg=9480.09, stdev=810.84 00:14:03.597 lat (usec): min=2483, max=16383, avg=9483.06, stdev=810.71 00:14:03.597 clat percentiles (usec): 00:14:03.597 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:14:03.597 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:14:03.597 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:14:03.597 | 99.00th=[11338], 99.50th=[11600], 99.90th=[14615], 99.95th=[15008], 00:14:03.597 | 99.99th=[16319] 00:14:03.597 bw ( KiB/s): min=25328, max=25944, per=99.93%, avg=25586.00, stdev=258.77, samples=4 00:14:03.597 iops : min= 6332, max= 6486, avg=6396.50, stdev=64.69, samples=4 00:14:03.597 lat (msec) : 4=0.05%, 10=52.60%, 20=47.35% 00:14:03.597 cpu : usr=71.10%, sys=22.32%, ctx=7, majf=0, minf=14 00:14:03.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:03.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.597 issued rwts: total=12850,12853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.597 00:14:03.597 Run status group 0 (all jobs): 00:14:03.597 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.6MB), run=2008-2008msec 00:14:03.597 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.6MB), run=2008-2008msec 00:14:03.597 18:12:21 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:03.855 18:12:22 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:04.114 18:12:22 -- host/fio.sh@64 -- # ls_nested_guid=a9a2760a-6192-4782-b3df-b117a6f91281 00:14:04.114 18:12:22 -- host/fio.sh@65 -- # get_lvs_free_mb a9a2760a-6192-4782-b3df-b117a6f91281 00:14:04.114 18:12:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=a9a2760a-6192-4782-b3df-b117a6f91281 00:14:04.114 18:12:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:04.114 18:12:22 -- common/autotest_common.sh@1355 -- # local fc 00:14:04.114 18:12:22 -- common/autotest_common.sh@1356 -- # local cs 00:14:04.114 18:12:22 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:04.372 18:12:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:04.372 { 00:14:04.372 "uuid": "8a7fe143-200c-41ce-b3d0-c14f786b490d", 00:14:04.372 "name": "lvs_0", 00:14:04.372 "base_bdev": "Nvme0n1", 00:14:04.372 "total_data_clusters": 4, 00:14:04.373 "free_clusters": 0, 00:14:04.373 "block_size": 4096, 00:14:04.373 "cluster_size": 1073741824 00:14:04.373 }, 00:14:04.373 { 00:14:04.373 "uuid": "a9a2760a-6192-4782-b3df-b117a6f91281", 00:14:04.373 "name": "lvs_n_0", 00:14:04.373 "base_bdev": "6f72ddc8-a968-47bd-86f2-c725d68b5a06", 00:14:04.373 "total_data_clusters": 1022, 00:14:04.373 "free_clusters": 1022, 00:14:04.373 "block_size": 4096, 00:14:04.373 "cluster_size": 4194304 00:14:04.373 } 00:14:04.373 ]' 00:14:04.373 18:12:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="a9a2760a-6192-4782-b3df-b117a6f91281") .free_clusters' 00:14:04.373 18:12:22 -- common/autotest_common.sh@1358 -- # fc=1022 00:14:04.373 18:12:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="a9a2760a-6192-4782-b3df-b117a6f91281") .cluster_size' 00:14:04.373 18:12:22 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:04.373 18:12:22 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:14:04.373 4088 00:14:04.373 18:12:22 -- common/autotest_common.sh@1363 -- # echo 4088 00:14:04.373 18:12:22 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:04.631 c9a4f30e-50fc-4971-9d1c-00ab838ff48e 00:14:04.631 18:12:23 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:04.890 18:12:23 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:05.148 18:12:23 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:05.407 18:12:23 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:05.407 18:12:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:05.407 18:12:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:05.407 18:12:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:05.407 18:12:23 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:14:05.407 18:12:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.407 18:12:23 -- common/autotest_common.sh@1330 -- # shift 00:14:05.407 18:12:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:05.407 18:12:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:05.407 18:12:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:05.407 18:12:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:05.407 18:12:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:05.407 18:12:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:05.407 18:12:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:05.407 18:12:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:05.666 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:05.666 fio-3.35 00:14:05.666 Starting 1 thread 00:14:08.198 00:14:08.198 test: (groupid=0, jobs=1): err= 0: pid=69795: Mon Nov 18 18:12:26 2024 00:14:08.198 read: IOPS=5809, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2008msec) 00:14:08.198 slat (usec): min=2, max=320, avg= 2.69, stdev= 3.96 00:14:08.198 clat (usec): min=3253, max=19559, avg=11544.54, stdev=966.97 00:14:08.198 lat (usec): min=3263, max=19561, avg=11547.23, stdev=966.66 00:14:08.198 clat percentiles (usec): 00:14:08.198 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:14:08.198 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:14:08.198 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:14:08.198 | 99.00th=[13698], 99.50th=[14091], 99.90th=[17957], 99.95th=[19268], 00:14:08.198 | 99.99th=[19530] 00:14:08.198 bw ( KiB/s): min=22248, max=23752, per=99.84%, avg=23202.00, stdev=661.10, samples=4 00:14:08.198 iops : min= 5562, max= 5938, avg=5800.50, stdev=165.27, samples=4 00:14:08.198 write: IOPS=5796, BW=22.6MiB/s (23.7MB/s)(45.5MiB/2008msec); 0 zone resets 00:14:08.198 slat (usec): min=2, max=279, avg= 2.81, stdev= 3.24 00:14:08.198 clat (usec): min=2478, max=17871, avg=10446.06, stdev=889.78 00:14:08.198 lat (usec): min=2492, max=17874, avg=10448.87, stdev=889.67 00:14:08.198 clat percentiles (usec): 00:14:08.198 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:14:08.198 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:14:08.198 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:14:08.198 | 99.00th=[12387], 99.50th=[12780], 99.90th=[16188], 99.95th=[16581], 00:14:08.198 | 99.99th=[17695] 00:14:08.198 bw ( KiB/s): min=23040, max=23296, per=99.87%, avg=23154.00, 
stdev=109.76, samples=4 00:14:08.198 iops : min= 5760, max= 5824, avg=5788.50, stdev=27.44, samples=4 00:14:08.198 lat (msec) : 4=0.04%, 10=16.50%, 20=83.46% 00:14:08.198 cpu : usr=73.49%, sys=20.98%, ctx=9, majf=0, minf=14 00:14:08.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:08.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:08.198 issued rwts: total=11666,11639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:08.198 00:14:08.198 Run status group 0 (all jobs): 00:14:08.198 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.8MB), run=2008-2008msec 00:14:08.198 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.5MiB (47.7MB), run=2008-2008msec 00:14:08.199 18:12:26 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:08.199 18:12:26 -- host/fio.sh@74 -- # sync 00:14:08.199 18:12:26 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:08.457 18:12:26 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:08.716 18:12:27 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:08.974 18:12:27 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:09.231 18:12:27 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:09.799 18:12:28 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:09.799 18:12:28 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:09.799 18:12:28 -- host/fio.sh@86 -- # nvmftestfini 00:14:09.799 18:12:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:09.799 18:12:28 -- nvmf/common.sh@116 -- # sync 00:14:09.799 18:12:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:09.799 18:12:28 -- nvmf/common.sh@119 -- # set +e 00:14:09.799 18:12:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:09.799 18:12:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:09.799 rmmod nvme_tcp 00:14:09.799 rmmod nvme_fabrics 00:14:09.799 rmmod nvme_keyring 00:14:09.799 18:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:09.799 18:12:28 -- nvmf/common.sh@123 -- # set -e 00:14:09.799 18:12:28 -- nvmf/common.sh@124 -- # return 0 00:14:09.799 18:12:28 -- nvmf/common.sh@477 -- # '[' -n 69485 ']' 00:14:09.799 18:12:28 -- nvmf/common.sh@478 -- # killprocess 69485 00:14:09.799 18:12:28 -- common/autotest_common.sh@936 -- # '[' -z 69485 ']' 00:14:09.799 18:12:28 -- common/autotest_common.sh@940 -- # kill -0 69485 00:14:09.799 18:12:28 -- common/autotest_common.sh@941 -- # uname 00:14:09.799 18:12:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.799 18:12:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69485 00:14:09.799 killing process with pid 69485 00:14:09.799 18:12:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.799 18:12:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.799 18:12:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69485' 00:14:09.799 18:12:28 -- common/autotest_common.sh@955 -- # kill 69485 00:14:09.799 18:12:28 -- 
common/autotest_common.sh@960 -- # wait 69485 00:14:10.059 18:12:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:10.059 18:12:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:10.059 18:12:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:10.059 18:12:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.059 18:12:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:10.059 18:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.059 18:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.059 18:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.059 18:12:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:10.059 00:14:10.059 real 0m19.253s 00:14:10.059 user 1m24.853s 00:14:10.059 sys 0m4.296s 00:14:10.059 18:12:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:10.059 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.059 ************************************ 00:14:10.060 END TEST nvmf_fio_host 00:14:10.060 ************************************ 00:14:10.060 18:12:28 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:10.060 18:12:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:10.060 18:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.060 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:14:10.060 ************************************ 00:14:10.060 START TEST nvmf_failover 00:14:10.060 ************************************ 00:14:10.060 18:12:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:10.060 * Looking for test storage... 00:14:10.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:10.060 18:12:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:10.060 18:12:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:10.060 18:12:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:10.318 18:12:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:10.318 18:12:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:10.318 18:12:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:10.318 18:12:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:10.318 18:12:28 -- scripts/common.sh@335 -- # IFS=.-: 00:14:10.318 18:12:28 -- scripts/common.sh@335 -- # read -ra ver1 00:14:10.318 18:12:28 -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.318 18:12:28 -- scripts/common.sh@336 -- # read -ra ver2 00:14:10.318 18:12:28 -- scripts/common.sh@337 -- # local 'op=<' 00:14:10.318 18:12:28 -- scripts/common.sh@339 -- # ver1_l=2 00:14:10.318 18:12:28 -- scripts/common.sh@340 -- # ver2_l=1 00:14:10.318 18:12:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:10.318 18:12:28 -- scripts/common.sh@343 -- # case "$op" in 00:14:10.318 18:12:28 -- scripts/common.sh@344 -- # : 1 00:14:10.318 18:12:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:10.318 18:12:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.318 18:12:28 -- scripts/common.sh@364 -- # decimal 1 00:14:10.318 18:12:28 -- scripts/common.sh@352 -- # local d=1 00:14:10.318 18:12:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.318 18:12:28 -- scripts/common.sh@354 -- # echo 1 00:14:10.318 18:12:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:10.318 18:12:28 -- scripts/common.sh@365 -- # decimal 2 00:14:10.318 18:12:28 -- scripts/common.sh@352 -- # local d=2 00:14:10.318 18:12:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.318 18:12:28 -- scripts/common.sh@354 -- # echo 2 00:14:10.318 18:12:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:10.318 18:12:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:10.318 18:12:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:10.318 18:12:28 -- scripts/common.sh@367 -- # return 0 00:14:10.318 18:12:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.318 18:12:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.318 --rc genhtml_branch_coverage=1 00:14:10.318 --rc genhtml_function_coverage=1 00:14:10.318 --rc genhtml_legend=1 00:14:10.318 --rc geninfo_all_blocks=1 00:14:10.318 --rc geninfo_unexecuted_blocks=1 00:14:10.318 00:14:10.318 ' 00:14:10.318 18:12:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.318 --rc genhtml_branch_coverage=1 00:14:10.318 --rc genhtml_function_coverage=1 00:14:10.318 --rc genhtml_legend=1 00:14:10.318 --rc geninfo_all_blocks=1 00:14:10.318 --rc geninfo_unexecuted_blocks=1 00:14:10.318 00:14:10.318 ' 00:14:10.318 18:12:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.318 --rc genhtml_branch_coverage=1 00:14:10.318 --rc genhtml_function_coverage=1 00:14:10.318 --rc genhtml_legend=1 00:14:10.318 --rc geninfo_all_blocks=1 00:14:10.318 --rc geninfo_unexecuted_blocks=1 00:14:10.318 00:14:10.318 ' 00:14:10.318 18:12:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:10.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.318 --rc genhtml_branch_coverage=1 00:14:10.318 --rc genhtml_function_coverage=1 00:14:10.318 --rc genhtml_legend=1 00:14:10.318 --rc geninfo_all_blocks=1 00:14:10.318 --rc geninfo_unexecuted_blocks=1 00:14:10.318 00:14:10.318 ' 00:14:10.318 18:12:28 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.318 18:12:28 -- nvmf/common.sh@7 -- # uname -s 00:14:10.318 18:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.318 18:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.318 18:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.318 18:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.318 18:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.318 18:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.318 18:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.318 18:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.318 18:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.318 18:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.318 18:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:10.318 
18:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:10.318 18:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.318 18:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.318 18:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.319 18:12:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.319 18:12:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.319 18:12:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.319 18:12:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.319 18:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.319 18:12:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.319 18:12:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.319 18:12:28 -- paths/export.sh@5 -- # export PATH 00:14:10.319 18:12:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.319 18:12:28 -- nvmf/common.sh@46 -- # : 0 00:14:10.319 18:12:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:10.319 18:12:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:10.319 18:12:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:10.319 18:12:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.319 18:12:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.319 18:12:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
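The lt 1.15 2 / cmp_versions trace a little earlier splits the two version strings on dots and compares them field by field to pick the right lcov option set. A simplified stand-in for that logic (not the actual scripts/common.sh implementation, just the idea):

    version_lt() {    # returns 0 (true) when $1 sorts before $2
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1      # equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"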
00:14:10.319 18:12:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:10.319 18:12:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:10.319 18:12:28 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.319 18:12:28 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.319 18:12:28 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.319 18:12:28 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.319 18:12:28 -- host/failover.sh@18 -- # nvmftestinit 00:14:10.319 18:12:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:10.319 18:12:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.319 18:12:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:10.319 18:12:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:10.319 18:12:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:10.319 18:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.319 18:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.319 18:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.319 18:12:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:10.319 18:12:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:10.319 18:12:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:10.319 18:12:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:10.319 18:12:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:10.319 18:12:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:10.319 18:12:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.319 18:12:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.319 18:12:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.319 18:12:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:10.319 18:12:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.319 18:12:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.319 18:12:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.319 18:12:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.319 18:12:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.319 18:12:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.319 18:12:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.319 18:12:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.319 18:12:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:10.319 18:12:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:10.319 Cannot find device "nvmf_tgt_br" 00:14:10.319 18:12:28 -- nvmf/common.sh@154 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.319 Cannot find device "nvmf_tgt_br2" 00:14:10.319 18:12:28 -- nvmf/common.sh@155 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:10.319 18:12:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:10.319 Cannot find device "nvmf_tgt_br" 00:14:10.319 18:12:28 -- nvmf/common.sh@157 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:10.319 Cannot find device "nvmf_tgt_br2" 00:14:10.319 18:12:28 -- nvmf/common.sh@158 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:10.319 18:12:28 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:14:10.319 18:12:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.319 18:12:28 -- nvmf/common.sh@161 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.319 18:12:28 -- nvmf/common.sh@162 -- # true 00:14:10.319 18:12:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.319 18:12:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.319 18:12:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.319 18:12:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.319 18:12:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.578 18:12:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.578 18:12:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.578 18:12:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.578 18:12:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.578 18:12:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:10.578 18:12:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:10.578 18:12:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:10.578 18:12:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:10.578 18:12:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.578 18:12:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.578 18:12:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.578 18:12:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:10.578 18:12:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:10.578 18:12:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.578 18:12:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.578 18:12:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.578 18:12:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.578 18:12:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.578 18:12:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:10.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:14:10.578 00:14:10.578 --- 10.0.0.2 ping statistics --- 00:14:10.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.578 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:10.578 18:12:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:10.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:10.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:10.578 00:14:10.578 --- 10.0.0.3 ping statistics --- 00:14:10.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.578 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:10.578 18:12:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:10.578 00:14:10.578 --- 10.0.0.1 ping statistics --- 00:14:10.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.578 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:10.578 18:12:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.578 18:12:29 -- nvmf/common.sh@421 -- # return 0 00:14:10.578 18:12:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:10.578 18:12:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.578 18:12:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:10.578 18:12:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:10.578 18:12:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.578 18:12:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:10.578 18:12:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:10.578 18:12:29 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:10.578 18:12:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:10.578 18:12:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.578 18:12:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.578 18:12:29 -- nvmf/common.sh@469 -- # nvmfpid=70047 00:14:10.578 18:12:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:10.578 18:12:29 -- nvmf/common.sh@470 -- # waitforlisten 70047 00:14:10.578 18:12:29 -- common/autotest_common.sh@829 -- # '[' -z 70047 ']' 00:14:10.578 18:12:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.578 18:12:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.578 18:12:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.578 18:12:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.578 18:12:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.578 [2024-11-18 18:12:29.141745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:10.578 [2024-11-18 18:12:29.141830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.836 [2024-11-18 18:12:29.276656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.836 [2024-11-18 18:12:29.329156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:10.836 [2024-11-18 18:12:29.329301] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.836 [2024-11-18 18:12:29.329313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
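nvmfappstart above launches the SPDK target inside the target namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and then sits in waitforlisten until the application's RPC socket responds. A rough sketch of that start-and-wait pattern; the retry count and the rpc_get_methods probe are illustrative, and the real waitforlisten in autotest_common.sh does more checking:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # the target is ready once its JSON-RPC socket answers a trivial request
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done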
00:14:10.836 [2024-11-18 18:12:29.329321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.836 [2024-11-18 18:12:29.329428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.836 [2024-11-18 18:12:29.329574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.836 [2024-11-18 18:12:29.329575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.771 18:12:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.771 18:12:30 -- common/autotest_common.sh@862 -- # return 0 00:14:11.771 18:12:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:11.771 18:12:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.771 18:12:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.771 18:12:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.771 18:12:30 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:12.030 [2024-11-18 18:12:30.502994] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.030 18:12:30 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:12.289 Malloc0 00:14:12.289 18:12:30 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.547 18:12:31 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.806 18:12:31 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.064 [2024-11-18 18:12:31.550246] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.064 18:12:31 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:13.323 [2024-11-18 18:12:31.794532] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:13.323 18:12:31 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:13.581 [2024-11-18 18:12:32.014786] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:13.581 18:12:32 -- host/failover.sh@31 -- # bdevperf_pid=70105 00:14:13.581 18:12:32 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:13.581 18:12:32 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.581 18:12:32 -- host/failover.sh@34 -- # waitforlisten 70105 /var/tmp/bdevperf.sock 00:14:13.581 18:12:32 -- common/autotest_common.sh@829 -- # '[' -z 70105 ']' 00:14:13.581 18:12:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.581 18:12:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.581 18:12:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
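Condensed, the target-side configuration that host/failover.sh drives through rpc.py above amounts to the sequence below (a sketch; the trace calls rpc.py without -s, i.e. against the target's default /var/tmp/spdk.sock, and the paths and NQN are the ones used in this run):

  # Sketch: target configuration as traced above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_create_transport -t tcp -o -u 8192               # TCP transport with the options traced above
  $RPC bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001   # allow-any-host subsystem, fixed serial
  $RPC nvmf_subsystem_add_ns $NQN Malloc0                    # expose Malloc0 as a namespace
  for port in 4420 4421 4422; do                             # three TCP listeners for the failover steps
      $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
  done

bdevperf is then started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f, so the steps that follow can attach controllers and drive I/O through its private RPC socket.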
00:14:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.581 18:12:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.581 18:12:32 -- common/autotest_common.sh@10 -- # set +x 00:14:14.518 18:12:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.518 18:12:33 -- common/autotest_common.sh@862 -- # return 0 00:14:14.518 18:12:33 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:14.777 NVMe0n1 00:14:14.777 18:12:33 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:15.345 00:14:15.345 18:12:33 -- host/failover.sh@39 -- # run_test_pid=70134 00:14:15.345 18:12:33 -- host/failover.sh@41 -- # sleep 1 00:14:15.345 18:12:33 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.280 18:12:34 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.539 [2024-11-18 18:12:34.900924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 [2024-11-18 18:12:34.901155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204fd00 is same with the state(5) to be set 00:14:16.539 18:12:34 -- host/failover.sh@45 -- # sleep 3 00:14:19.824 18:12:37 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:19.824 00:14:19.824 18:12:38 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:20.083 [2024-11-18 18:12:38.525297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525451] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525490] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 [2024-11-18 18:12:38.525592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20503c0 is same with the state(5) to be set 00:14:20.083 18:12:38 -- host/failover.sh@50 -- # sleep 3 00:14:23.376 18:12:41 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.376 [2024-11-18 18:12:41.798983] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.376 18:12:41 -- host/failover.sh@55 -- # sleep 1 00:14:24.326 18:12:42 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:24.585 [2024-11-18 18:12:43.074429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 
18:12:43.074603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 [2024-11-18 18:12:43.074627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e9f0 is same with the state(5) to be set 00:14:24.585 18:12:43 -- host/failover.sh@59 -- # wait 70134 00:14:31.158 0 00:14:31.158 18:12:48 -- host/failover.sh@61 -- # killprocess 70105 00:14:31.158 18:12:48 -- common/autotest_common.sh@936 -- # '[' -z 70105 ']' 00:14:31.158 18:12:48 -- common/autotest_common.sh@940 -- # kill -0 70105 00:14:31.158 18:12:48 -- common/autotest_common.sh@941 -- # uname 00:14:31.158 18:12:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.159 18:12:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70105 00:14:31.159 killing process with pid 70105 00:14:31.159 18:12:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.159 18:12:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.159 18:12:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70105' 00:14:31.159 18:12:48 -- common/autotest_common.sh@955 -- # kill 70105 00:14:31.159 18:12:48 -- common/autotest_common.sh@960 -- # wait 70105 00:14:31.159 18:12:49 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:31.159 [2024-11-18 18:12:32.100614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:31.159 [2024-11-18 18:12:32.100773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:14:31.159 [2024-11-18 18:12:32.240123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.159 [2024-11-18 18:12:32.313296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.159 Running I/O for 15 seconds... 
00:14:31.159 [2024-11-18 18:12:34.901233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 
18:12:34.901677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.901975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.901989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.159 [2024-11-18 18:12:34.902067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.159 [2024-11-18 18:12:34.902128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.159 [2024-11-18 18:12:34.902162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.159 [2024-11-18 18:12:34.902306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.159 [2024-11-18 18:12:34.902339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.159 [2024-11-18 18:12:34.902616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.159 [2024-11-18 18:12:34.902634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.902665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.902697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.902754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.902787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.902818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.902850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.902882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.902915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.902977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.902994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:31.160 [2024-11-18 18:12:34.903567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 18:12:34.903865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.160 [2024-11-18 
18:12:34.903896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.160 [2024-11-18 18:12:34.903958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.160 [2024-11-18 18:12:34.903975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.904600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.904939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.904985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.905000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.905030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.161 [2024-11-18 18:12:34.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.161 [2024-11-18 18:12:34.905334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.161 [2024-11-18 18:12:34.905349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.162 [2024-11-18 18:12:34.905380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.162 [2024-11-18 18:12:34.905441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.162 [2024-11-18 18:12:34.905472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.162 [2024-11-18 18:12:34.905502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.162 [2024-11-18 18:12:34.905564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 
18:12:34.905624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:34.905807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f1970 is same with the state(5) to be set 00:14:31.162 [2024-11-18 18:12:34.905843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.162 [2024-11-18 18:12:34.905854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.162 [2024-11-18 18:12:34.905864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121512 len:8 PRP1 0x0 PRP2 0x0 00:14:31.162 [2024-11-18 18:12:34.905878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.905926] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6f1970 was disconnected and freed. reset controller. 
00:14:31.162 [2024-11-18 18:12:34.905945] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:31.162 [2024-11-18 18:12:34.906003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.162 [2024-11-18 18:12:34.906026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.906043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.162 [2024-11-18 18:12:34.906059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.906073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.162 [2024-11-18 18:12:34.906114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.906132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.162 [2024-11-18 18:12:34.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:34.906163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:31.162 [2024-11-18 18:12:34.906224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e690 (9): Bad file descriptor 00:14:31.162 [2024-11-18 18:12:34.908512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:31.162 [2024-11-18 18:12:34.943777] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
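(Editor's aside, not test output: the block above is the expected pattern for this test case: queued READ/WRITE commands are printed and completed as "ABORTED - SQ DELETION" when the qpair is torn down, then bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 and the reset completes. A minimal, hypothetical Python sketch for tallying those aborted completions from a saved copy of this console log is shown below; the file name build.log and the summarize helper are assumptions, and the regexes simply match the nvme_qpair.c notice format visible above. Because the console wraps many notices onto one physical line, the sketch scans whole-file text rather than line by line.)

    #!/usr/bin/env python3
    # Sketch: count printed commands and ABORTED - SQ DELETION completions
    # in an SPDK autotest console log of the format shown above.
    import re
    from collections import Counter

    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )
    ABORT_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION"
    )

    def summarize(path="build.log"):  # path is a placeholder
        text = open(path, encoding="utf-8", errors="replace").read()
        opcodes = Counter(m.group(1) for m in CMD_RE.finditer(text))
        aborts = sum(1 for _ in ABORT_RE.finditer(text))
        return opcodes, aborts

    if __name__ == "__main__":
        opcodes, aborts = summarize()
        print("aborted completions:", aborts)
        for op, n in opcodes.items():
            print(f"{op} commands printed: {n}")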
00:14:31.162 [2024-11-18 18:12:38.525696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.525970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.525987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 
18:12:38.526132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.162 [2024-11-18 18:12:38.526341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.162 [2024-11-18 18:12:38.526359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.526690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.526721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.526792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526869] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.526976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.526991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.163 [2024-11-18 18:12:38.527358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.163 [2024-11-18 18:12:38.527448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.163 [2024-11-18 18:12:38.527464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 
[2024-11-18 18:12:38.527830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.527959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.527975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.527990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528138] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.164 [2024-11-18 18:12:38.528414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.164 [2024-11-18 18:12:38.528682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.164 [2024-11-18 18:12:38.528697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.528736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.528766] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.528796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.528826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.528856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.528886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.528916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.528946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.528977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.528992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.165 [2024-11-18 18:12:38.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 
18:12:38.529730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.165 [2024-11-18 18:12:38.529941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.165 [2024-11-18 18:12:38.529964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d6450 is same with the state(5) to be set 00:14:31.165 [2024-11-18 18:12:38.529982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.165 [2024-11-18 18:12:38.529994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.166 [2024-11-18 18:12:38.530005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125136 len:8 PRP1 0x0 PRP2 0x0 00:14:31.166 [2024-11-18 18:12:38.530018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:38.530064] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6d6450 was disconnected and freed. reset controller. 
00:14:31.166 [2024-11-18 18:12:38.530084] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:31.166 [2024-11-18 18:12:38.530165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.166 [2024-11-18 18:12:38.530188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:38.530204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.166 [2024-11-18 18:12:38.530219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:38.530234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.166 [2024-11-18 18:12:38.530249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:38.530265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.166 [2024-11-18 18:12:38.530279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:38.530294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:31.166 [2024-11-18 18:12:38.530342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e690 (9): Bad file descriptor 00:14:31.166 [2024-11-18 18:12:38.532725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:31.166 [2024-11-18 18:12:38.566806] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:31.166 [2024-11-18 18:12:43.074692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.074793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.074848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.074881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.074938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.074973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.074989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 
18:12:43.075137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.166 [2024-11-18 18:12:43.075842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.166 [2024-11-18 18:12:43.075858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.166 [2024-11-18 18:12:43.075874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.075891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.075906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.075922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.075937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.075969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.075984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101200 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 
[2024-11-18 18:12:43.076897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.076930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.076962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.076979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.077008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.077025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.077040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.077057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.077073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.077089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.167 [2024-11-18 18:12:43.077105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.077128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.167 [2024-11-18 18:12:43.077144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.167 [2024-11-18 18:12:43.077161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.077897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.077969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.077986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.168 [2024-11-18 18:12:43.078354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.168 [2024-11-18 18:12:43.078371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.168 [2024-11-18 18:12:43.078411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:31.169 [2024-11-18 18:12:43.078953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.078969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.078985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 
[2024-11-18 18:12:43.079001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.169 [2024-11-18 18:12:43.079189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7038e0 is same with the state(5) to be set 00:14:31.169 [2024-11-18 18:12:43.079222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:31.169 [2024-11-18 18:12:43.079233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:31.169 [2024-11-18 18:12:43.079243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101000 len:8 PRP1 0x0 PRP2 0x0 00:14:31.169 [2024-11-18 18:12:43.079257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079303] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7038e0 was disconnected and freed. reset controller. 
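The wall of *NOTICE* lines above is the expected signature of a path switch rather than a failure: once bdev_nvme abandons the 10.0.0.2:4422 path, the submission queue on that connection is deleted, every I/O still outstanding on it completes as ABORTED - SQ DELETION (status 00/08), the qpair (0x7038e0 here) is freed, and the controller is reset toward the next configured address. If a run like this has been captured to a file, the aborts are easy to tally; a minimal sketch, assuming the console output above was saved as failover.log (a hypothetical name, not a file the test writes):

  # total commands completed as ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' failover.log
  # the same aborts broken down by opcode
  grep -oE '(READ|WRITE) sqid:1' failover.log | sort | uniq -c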
00:14:31.169 [2024-11-18 18:12:43.079331] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:31.169 [2024-11-18 18:12:43.079386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.169 [2024-11-18 18:12:43.079409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.169 [2024-11-18 18:12:43.079440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.169 [2024-11-18 18:12:43.079469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.169 [2024-11-18 18:12:43.079497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.169 [2024-11-18 18:12:43.079511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:31.169 [2024-11-18 18:12:43.079560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e690 (9): Bad file descriptor 00:14:31.169 [2024-11-18 18:12:43.081743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:31.169 [2024-11-18 18:12:43.107293] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:31.169 00:14:31.169 Latency(us) 00:14:31.169 [2024-11-18T18:12:49.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.169 [2024-11-18T18:12:49.773Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:31.169 Verification LBA range: start 0x0 length 0x4000 00:14:31.169 NVMe0n1 : 15.01 13391.19 52.31 321.80 0.00 9315.87 476.63 14417.92 00:14:31.169 [2024-11-18T18:12:49.773Z] =================================================================================================================== 00:14:31.169 [2024-11-18T18:12:49.773Z] Total : 13391.19 52.31 321.80 0.00 9315.87 476.63 14417.92 00:14:31.169 Received shutdown signal, test time was about 15.000000 seconds 00:14:31.169 00:14:31.169 Latency(us) 00:14:31.169 [2024-11-18T18:12:49.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.169 [2024-11-18T18:12:49.773Z] =================================================================================================================== 00:14:31.169 [2024-11-18T18:12:49.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.169 18:12:49 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:31.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
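That closes the scripted 15 second verify run: the summary table lists, for NVMe0n1, the runtime, IOPS, MiB/s, failed and timed-out I/O per second, and the average/min/max latency in microseconds, and the all-zero table under the shutdown notice is just the final aggregate printed after the job has stopped. The grep -c 'Resetting controller successful' is this stage's pass criterion, one hit per path switch exercised above, presumably counted from the try.txt capture that is cat'd further down; the count=3 assignment that follows is the xtrace echo of exactly that check. The 'Waiting for process to start up...' message belongs to a second bdevperf instance, launched with -z so it idles until it is driven over its own RPC socket. A minimal sketch of that launch pattern, with the binary path and socket taken from the log and a polling loop standing in for the harness's waitforlisten helper:

  # start bdevperf in RPC-wait mode on a private socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # crude stand-in for waitforlisten: poll until the RPC socket shows up
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done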
00:14:31.169 18:12:49 -- host/failover.sh@65 -- # count=3 00:14:31.169 18:12:49 -- host/failover.sh@67 -- # (( count != 3 )) 00:14:31.169 18:12:49 -- host/failover.sh@73 -- # bdevperf_pid=70308 00:14:31.169 18:12:49 -- host/failover.sh@75 -- # waitforlisten 70308 /var/tmp/bdevperf.sock 00:14:31.169 18:12:49 -- common/autotest_common.sh@829 -- # '[' -z 70308 ']' 00:14:31.169 18:12:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.169 18:12:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.169 18:12:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.169 18:12:49 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:31.169 18:12:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.169 18:12:49 -- common/autotest_common.sh@10 -- # set +x 00:14:31.169 18:12:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.170 18:12:49 -- common/autotest_common.sh@862 -- # return 0 00:14:31.170 18:12:49 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:31.170 [2024-11-18 18:12:49.600120] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:31.170 18:12:49 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:31.428 [2024-11-18 18:12:49.880455] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:31.428 18:12:49 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:31.687 NVMe0n1 00:14:31.687 18:12:50 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:31.946 00:14:31.946 18:12:50 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:32.204 00:14:32.463 18:12:50 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:32.463 18:12:50 -- host/failover.sh@82 -- # grep -q NVMe0 00:14:32.721 18:12:51 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:32.980 18:12:51 -- host/failover.sh@87 -- # sleep 3 00:14:36.265 18:12:54 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:36.265 18:12:54 -- host/failover.sh@88 -- # grep -q NVMe0 00:14:36.265 18:12:54 -- host/failover.sh@90 -- # run_test_pid=70378 00:14:36.265 18:12:54 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.265 18:12:54 -- host/failover.sh@92 -- # wait 70378 00:14:37.202 0 00:14:37.202 18:12:55 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:37.202 [2024-11-18 18:12:49.097200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:37.202 [2024-11-18 18:12:49.097306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70308 ] 00:14:37.202 [2024-11-18 18:12:49.236195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.202 [2024-11-18 18:12:49.295497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.202 [2024-11-18 18:12:51.326875] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:37.202 [2024-11-18 18:12:51.327010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.202 [2024-11-18 18:12:51.327035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.202 [2024-11-18 18:12:51.327069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.202 [2024-11-18 18:12:51.327082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.202 [2024-11-18 18:12:51.327096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.202 [2024-11-18 18:12:51.327109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.202 [2024-11-18 18:12:51.327123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.202 [2024-11-18 18:12:51.327136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.202 [2024-11-18 18:12:51.327149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:37.202 [2024-11-18 18:12:51.327199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:37.202 [2024-11-18 18:12:51.327232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300690 (9): Bad file descriptor 00:14:37.202 [2024-11-18 18:12:51.336490] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:37.202 Running I/O for 1 seconds... 
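The failover drill for this hand-driven pass was set up entirely over RPC before the 1 second verify above was started. Condensed from the commands in the log (target listeners go through the default RPC socket, the bdevperf-side controller through /var/tmp/bdevperf.sock; the loop merely compacts the three separate attach calls), a rough sketch:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # expose two more portals on the target
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

  # give bdevperf's NVMe0 controller all three paths
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n $NQN
  done

  # confirm the controller exists, then drop the active path
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3

  # kick off the queued bdevperf job and wait for it to finish
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  wait $!

The try.txt excerpt above shows the effect: the SPDK-side initiator logs 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421', aborts its queued admin requests, reconnects, and reports 'Resetting controller successful' before the verify workload begins.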
00:14:37.202 00:14:37.202 Latency(us) 00:14:37.202 [2024-11-18T18:12:55.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.202 [2024-11-18T18:12:55.806Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:37.202 Verification LBA range: start 0x0 length 0x4000 00:14:37.202 NVMe0n1 : 1.01 13316.29 52.02 0.00 0.00 9559.78 960.70 17754.30 00:14:37.202 [2024-11-18T18:12:55.806Z] =================================================================================================================== 00:14:37.202 [2024-11-18T18:12:55.806Z] Total : 13316.29 52.02 0.00 0.00 9559.78 960.70 17754.30 00:14:37.202 18:12:55 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:37.202 18:12:55 -- host/failover.sh@95 -- # grep -q NVMe0 00:14:37.462 18:12:56 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:37.722 18:12:56 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:37.722 18:12:56 -- host/failover.sh@99 -- # grep -q NVMe0 00:14:38.295 18:12:56 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:38.295 18:12:56 -- host/failover.sh@101 -- # sleep 3 00:14:41.581 18:12:59 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:41.581 18:12:59 -- host/failover.sh@103 -- # grep -q NVMe0 00:14:41.581 18:13:00 -- host/failover.sh@108 -- # killprocess 70308 00:14:41.581 18:13:00 -- common/autotest_common.sh@936 -- # '[' -z 70308 ']' 00:14:41.581 18:13:00 -- common/autotest_common.sh@940 -- # kill -0 70308 00:14:41.581 18:13:00 -- common/autotest_common.sh@941 -- # uname 00:14:41.581 18:13:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.581 18:13:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70308 00:14:41.840 killing process with pid 70308 00:14:41.840 18:13:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:41.840 18:13:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:41.840 18:13:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70308' 00:14:41.840 18:13:00 -- common/autotest_common.sh@955 -- # kill 70308 00:14:41.840 18:13:00 -- common/autotest_common.sh@960 -- # wait 70308 00:14:41.840 18:13:00 -- host/failover.sh@110 -- # sync 00:14:41.840 18:13:00 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.098 18:13:00 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:42.098 18:13:00 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:42.098 18:13:00 -- host/failover.sh@116 -- # nvmftestfini 00:14:42.098 18:13:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:42.098 18:13:00 -- nvmf/common.sh@116 -- # sync 00:14:42.357 18:13:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:42.357 18:13:00 -- nvmf/common.sh@119 -- # set +e 00:14:42.357 18:13:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:42.357 18:13:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:42.357 rmmod nvme_tcp 
00:14:42.357 rmmod nvme_fabrics 00:14:42.357 rmmod nvme_keyring 00:14:42.357 18:13:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:42.357 18:13:00 -- nvmf/common.sh@123 -- # set -e 00:14:42.357 18:13:00 -- nvmf/common.sh@124 -- # return 0 00:14:42.357 18:13:00 -- nvmf/common.sh@477 -- # '[' -n 70047 ']' 00:14:42.357 18:13:00 -- nvmf/common.sh@478 -- # killprocess 70047 00:14:42.357 18:13:00 -- common/autotest_common.sh@936 -- # '[' -z 70047 ']' 00:14:42.357 18:13:00 -- common/autotest_common.sh@940 -- # kill -0 70047 00:14:42.357 18:13:00 -- common/autotest_common.sh@941 -- # uname 00:14:42.357 18:13:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.357 18:13:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70047 00:14:42.357 killing process with pid 70047 00:14:42.357 18:13:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:42.357 18:13:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:42.357 18:13:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70047' 00:14:42.357 18:13:00 -- common/autotest_common.sh@955 -- # kill 70047 00:14:42.357 18:13:00 -- common/autotest_common.sh@960 -- # wait 70047 00:14:42.615 18:13:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:42.615 18:13:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:42.615 18:13:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:42.615 18:13:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.615 18:13:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:42.615 18:13:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.615 18:13:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.615 18:13:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.615 18:13:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:42.615 00:14:42.615 real 0m32.444s 00:14:42.615 user 2m5.759s 00:14:42.615 sys 0m5.567s 00:14:42.615 18:13:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:42.615 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:42.615 ************************************ 00:14:42.615 END TEST nvmf_failover 00:14:42.615 ************************************ 00:14:42.615 18:13:01 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:42.615 18:13:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:42.615 18:13:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.615 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:42.615 ************************************ 00:14:42.615 START TEST nvmf_discovery 00:14:42.615 ************************************ 00:14:42.615 18:13:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:14:42.615 * Looking for test storage... 
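Before the discovery test's setup continues below, the teardown that nvmftestfini just ran for the failover test is worth spelling out. A rough equivalent of the traced commands, with the subsystem name, module names, pid and interface all taken from the log (the harness's killprocess does a little more bookkeeping than a bare kill):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nvmfpid=70047   # pid of the nvmf_tgt process in this run

  # remove the test subsystem from the target
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # unload the kernel initiator modules; the verbose output above shows
  # nvme_fabrics and nvme_keyring being dropped along with nvme_tcp
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # stop the target and tear down the veth-based test network
  kill "$nvmfpid"
  ip -4 addr flush nvmf_init_if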
00:14:42.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:42.615 18:13:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:42.615 18:13:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:42.615 18:13:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:42.615 18:13:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:42.615 18:13:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:42.615 18:13:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:42.615 18:13:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:42.615 18:13:01 -- scripts/common.sh@335 -- # IFS=.-: 00:14:42.615 18:13:01 -- scripts/common.sh@335 -- # read -ra ver1 00:14:42.615 18:13:01 -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.615 18:13:01 -- scripts/common.sh@336 -- # read -ra ver2 00:14:42.615 18:13:01 -- scripts/common.sh@337 -- # local 'op=<' 00:14:42.615 18:13:01 -- scripts/common.sh@339 -- # ver1_l=2 00:14:42.615 18:13:01 -- scripts/common.sh@340 -- # ver2_l=1 00:14:42.615 18:13:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:42.615 18:13:01 -- scripts/common.sh@343 -- # case "$op" in 00:14:42.615 18:13:01 -- scripts/common.sh@344 -- # : 1 00:14:42.615 18:13:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:42.615 18:13:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:42.615 18:13:01 -- scripts/common.sh@364 -- # decimal 1 00:14:42.615 18:13:01 -- scripts/common.sh@352 -- # local d=1 00:14:42.615 18:13:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.615 18:13:01 -- scripts/common.sh@354 -- # echo 1 00:14:42.615 18:13:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:42.873 18:13:01 -- scripts/common.sh@365 -- # decimal 2 00:14:42.873 18:13:01 -- scripts/common.sh@352 -- # local d=2 00:14:42.873 18:13:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.873 18:13:01 -- scripts/common.sh@354 -- # echo 2 00:14:42.873 18:13:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:42.873 18:13:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:42.873 18:13:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:42.873 18:13:01 -- scripts/common.sh@367 -- # return 0 00:14:42.873 18:13:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.873 18:13:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:42.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.873 --rc genhtml_branch_coverage=1 00:14:42.873 --rc genhtml_function_coverage=1 00:14:42.873 --rc genhtml_legend=1 00:14:42.873 --rc geninfo_all_blocks=1 00:14:42.873 --rc geninfo_unexecuted_blocks=1 00:14:42.873 00:14:42.873 ' 00:14:42.873 18:13:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:42.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.873 --rc genhtml_branch_coverage=1 00:14:42.873 --rc genhtml_function_coverage=1 00:14:42.873 --rc genhtml_legend=1 00:14:42.873 --rc geninfo_all_blocks=1 00:14:42.873 --rc geninfo_unexecuted_blocks=1 00:14:42.873 00:14:42.873 ' 00:14:42.873 18:13:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:42.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.873 --rc genhtml_branch_coverage=1 00:14:42.873 --rc genhtml_function_coverage=1 00:14:42.873 --rc genhtml_legend=1 00:14:42.873 --rc geninfo_all_blocks=1 00:14:42.873 --rc geninfo_unexecuted_blocks=1 00:14:42.873 00:14:42.873 ' 00:14:42.873 
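The [[ y == y ]] and lcov --version trace above is autotest_common.sh deciding which coverage options it may pass to lcov; the interesting part is the version comparison from scripts/common.sh, which splits both version strings on dots and dashes and compares them field by field. A reduced, self-contained sketch of that idiom (the function name lt is borrowed from the trace; purely numeric version components are assumed):

  lt() {   # return 0 if version $1 is strictly lower than version $2
      local -a a b
      local i
      IFS='.-' read -ra a <<< "$1"
      IFS='.-' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1   # versions are equal
  }

  lt 1.15 2 && echo 'installed lcov predates 2.x'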
18:13:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:42.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.873 --rc genhtml_branch_coverage=1 00:14:42.873 --rc genhtml_function_coverage=1 00:14:42.873 --rc genhtml_legend=1 00:14:42.873 --rc geninfo_all_blocks=1 00:14:42.873 --rc geninfo_unexecuted_blocks=1 00:14:42.873 00:14:42.873 ' 00:14:42.873 18:13:01 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.873 18:13:01 -- nvmf/common.sh@7 -- # uname -s 00:14:42.873 18:13:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.873 18:13:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.873 18:13:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.873 18:13:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.873 18:13:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.873 18:13:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.873 18:13:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.873 18:13:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.873 18:13:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.873 18:13:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.873 18:13:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:42.873 18:13:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:42.873 18:13:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.873 18:13:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.873 18:13:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:42.873 18:13:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.873 18:13:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.873 18:13:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.873 18:13:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.873 18:13:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.873 18:13:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.873 18:13:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.873 18:13:01 -- paths/export.sh@5 -- # export PATH 00:14:42.873 18:13:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.873 18:13:01 -- nvmf/common.sh@46 -- # : 0 00:14:42.873 18:13:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.873 18:13:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.873 18:13:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.873 18:13:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.873 18:13:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.873 18:13:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.873 18:13:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.873 18:13:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.873 18:13:01 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:14:42.873 18:13:01 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:14:42.873 18:13:01 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:42.873 18:13:01 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:14:42.873 18:13:01 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:14:42.873 18:13:01 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:14:42.873 18:13:01 -- host/discovery.sh@25 -- # nvmftestinit 00:14:42.873 18:13:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:42.873 18:13:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.873 18:13:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:42.873 18:13:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:42.873 18:13:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:42.873 18:13:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.873 18:13:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.873 18:13:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.873 18:13:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:42.873 18:13:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:42.873 18:13:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:42.874 18:13:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:42.874 18:13:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:42.874 18:13:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:42.874 18:13:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.874 18:13:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.874 18:13:01 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.874 18:13:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:42.874 18:13:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.874 18:13:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.874 18:13:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.874 18:13:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.874 18:13:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.874 18:13:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.874 18:13:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.874 18:13:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.874 18:13:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:42.874 18:13:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:42.874 Cannot find device "nvmf_tgt_br" 00:14:42.874 18:13:01 -- nvmf/common.sh@154 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.874 Cannot find device "nvmf_tgt_br2" 00:14:42.874 18:13:01 -- nvmf/common.sh@155 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:42.874 18:13:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:42.874 Cannot find device "nvmf_tgt_br" 00:14:42.874 18:13:01 -- nvmf/common.sh@157 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:42.874 Cannot find device "nvmf_tgt_br2" 00:14:42.874 18:13:01 -- nvmf/common.sh@158 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:42.874 18:13:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:42.874 18:13:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.874 18:13:01 -- nvmf/common.sh@161 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.874 18:13:01 -- nvmf/common.sh@162 -- # true 00:14:42.874 18:13:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.874 18:13:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.874 18:13:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.874 18:13:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.874 18:13:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.874 18:13:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.874 18:13:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.874 18:13:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.874 18:13:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.874 18:13:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:42.874 18:13:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:43.133 18:13:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:43.133 18:13:01 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:43.133 18:13:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.133 18:13:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.133 18:13:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.133 18:13:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:43.133 18:13:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:43.133 18:13:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.133 18:13:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.133 18:13:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.133 18:13:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.133 18:13:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.133 18:13:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:43.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:43.133 00:14:43.133 --- 10.0.0.2 ping statistics --- 00:14:43.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.133 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:43.133 18:13:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:43.133 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.133 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:43.133 00:14:43.133 --- 10.0.0.3 ping statistics --- 00:14:43.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.133 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:43.133 18:13:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:43.133 00:14:43.133 --- 10.0.0.1 ping statistics --- 00:14:43.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.133 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:43.133 18:13:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.133 18:13:01 -- nvmf/common.sh@421 -- # return 0 00:14:43.133 18:13:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:43.133 18:13:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.133 18:13:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:43.133 18:13:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:43.133 18:13:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.133 18:13:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:43.133 18:13:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:43.133 18:13:01 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:14:43.133 18:13:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:43.133 18:13:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:43.133 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.133 18:13:01 -- nvmf/common.sh@469 -- # nvmfpid=70653 00:14:43.133 18:13:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.133 18:13:01 -- nvmf/common.sh@470 -- # waitforlisten 70653 00:14:43.133 18:13:01 -- common/autotest_common.sh@829 -- # '[' -z 70653 ']' 00:14:43.133 18:13:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.133 18:13:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.133 18:13:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.133 18:13:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.133 18:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:43.133 [2024-11-18 18:13:01.660811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:43.133 [2024-11-18 18:13:01.660916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.392 [2024-11-18 18:13:01.799087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.392 [2024-11-18 18:13:01.851022] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:43.392 [2024-11-18 18:13:01.851166] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.392 [2024-11-18 18:13:01.851178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.392 [2024-11-18 18:13:01.851186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
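nvmf_veth_init, traced above, is what gives these tests their self-contained TCP fabric: three veth pairs whose bridge ends stay in the root namespace on nvmf_br, while the target ends move into the nvmf_tgt_ns_spdk namespace, so 10.0.0.1 (initiator) can reach 10.0.0.2 and 10.0.0.3 (target listeners) without touching real NICs. Condensed from the commands in the trace, names and addresses as shown:

# test-network sketch (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3      # the sanity pings seen above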
00:14:43.392 [2024-11-18 18:13:01.851213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.329 18:13:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.329 18:13:02 -- common/autotest_common.sh@862 -- # return 0 00:14:44.329 18:13:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.329 18:13:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 18:13:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.329 18:13:02 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.329 18:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 [2024-11-18 18:13:02.683745] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.329 18:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.329 18:13:02 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:14:44.329 18:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 [2024-11-18 18:13:02.691858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:44.329 18:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.329 18:13:02 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:14:44.329 18:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 null0 00:14:44.329 18:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.329 18:13:02 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:14:44.329 18:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 null1 00:14:44.329 18:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.329 18:13:02 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:14:44.329 18:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 18:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.329 18:13:02 -- host/discovery.sh@45 -- # hostpid=70685 00:14:44.329 18:13:02 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:14:44.329 18:13:02 -- host/discovery.sh@46 -- # waitforlisten 70685 /tmp/host.sock 00:14:44.329 18:13:02 -- common/autotest_common.sh@829 -- # '[' -z 70685 ']' 00:14:44.329 18:13:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:44.329 18:13:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.329 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:44.329 18:13:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:44.329 18:13:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.329 18:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:44.329 [2024-11-18 18:13:02.774024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
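With the target nvmf_tgt up inside the namespace, discovery.sh configures it as a discovery target and then starts a second SPDK app (the "host") on its own RPC socket. A rough equivalent of the rpc_cmd calls traced above, written as direct scripts/rpc.py invocations run from the spdk checkout (rpc_cmd is just the test wrapper; flags copied verbatim from the trace):

# discovery-target side (default RPC socket of the namespaced nvmf_tgt)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512     # null bdevs that later back the test namespaces
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py bdev_wait_for_examine

# "host" side: a second nvmf_tgt in the root namespace, driven over /tmp/host.sock
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!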
00:14:44.329 [2024-11-18 18:13:02.774112] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70685 ] 00:14:44.329 [2024-11-18 18:13:02.915944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.588 [2024-11-18 18:13:02.985132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:44.588 [2024-11-18 18:13:02.985303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.526 18:13:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.526 18:13:03 -- common/autotest_common.sh@862 -- # return 0 00:14:45.526 18:13:03 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.526 18:13:03 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@72 -- # notify_id=0 00:14:45.526 18:13:03 -- host/discovery.sh@78 -- # get_subsystem_names 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # xargs 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # sort 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:14:45.526 18:13:03 -- host/discovery.sh@79 -- # get_bdev_list 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # sort 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # xargs 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:14:45.526 18:13:03 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@82 -- # get_subsystem_names 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # sort 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers 00:14:45.526 18:13:03 -- host/discovery.sh@59 -- # xargs 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:03 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:14:45.526 18:13:03 -- host/discovery.sh@83 -- # get_bdev_list 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # sort 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.526 18:13:03 -- host/discovery.sh@55 -- # xargs 00:14:45.526 18:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:04 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:14:45.526 18:13:04 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:14:45.526 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:04 -- host/discovery.sh@86 -- # get_subsystem_names 00:14:45.526 18:13:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.526 18:13:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.526 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:04 -- host/discovery.sh@59 -- # xargs 00:14:45.526 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:04 -- host/discovery.sh@59 -- # sort 00:14:45.526 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.526 18:13:04 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:14:45.526 18:13:04 -- host/discovery.sh@87 -- # get_bdev_list 00:14:45.526 18:13:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.526 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.526 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.526 18:13:04 -- host/discovery.sh@55 -- # sort 00:14:45.526 18:13:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.526 18:13:04 -- host/discovery.sh@55 -- # xargs 00:14:45.526 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.785 18:13:04 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:14:45.785 18:13:04 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.785 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.785 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.785 [2024-11-18 18:13:04.164320] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.785 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.785 18:13:04 -- host/discovery.sh@92 -- # get_subsystem_names 00:14:45.785 18:13:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:45.785 18:13:04 -- host/discovery.sh@59 -- # sort 00:14:45.785 18:13:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:45.785 18:13:04 -- host/discovery.sh@59 -- # xargs 00:14:45.785 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.785 18:13:04 -- common/autotest_common.sh@10 -- # set +x 
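Each assertion in the discovery test is built from the same few helpers polling the host instance over /tmp/host.sock. Reconstructed here from the pipelines visible in the trace (the canonical bodies live in test/nvmf/host/discovery.sh, and rpc_cmd wraps scripts/rpc.py):

# helper sketch: what the repeated get_* calls above expand to
get_subsystem_names() {   # attached NVMe-oF controllers, e.g. "nvme0"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # namespaces surfaced as bdevs, e.g. "nvme0n1 nvme0n2"
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {   # listener ports a controller is connected through, e.g. "4420 4421"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() {   # bdev add/remove notifications emitted since $notify_id
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}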
00:14:45.785 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.785 18:13:04 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:14:45.785 18:13:04 -- host/discovery.sh@93 -- # get_bdev_list 00:14:45.785 18:13:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:45.785 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.785 18:13:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:45.785 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.785 18:13:04 -- host/discovery.sh@55 -- # sort 00:14:45.785 18:13:04 -- host/discovery.sh@55 -- # xargs 00:14:45.785 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.785 18:13:04 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:14:45.785 18:13:04 -- host/discovery.sh@94 -- # get_notification_count 00:14:45.786 18:13:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:45.786 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.786 18:13:04 -- host/discovery.sh@74 -- # jq '. | length' 00:14:45.786 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.786 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.786 18:13:04 -- host/discovery.sh@74 -- # notification_count=0 00:14:45.786 18:13:04 -- host/discovery.sh@75 -- # notify_id=0 00:14:45.786 18:13:04 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:14:45.786 18:13:04 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:14:45.786 18:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.786 18:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:45.786 18:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.786 18:13:04 -- host/discovery.sh@100 -- # sleep 1 00:14:46.353 [2024-11-18 18:13:04.816620] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:46.354 [2024-11-18 18:13:04.816667] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:46.354 [2024-11-18 18:13:04.816710] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:46.354 [2024-11-18 18:13:04.822673] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:46.354 [2024-11-18 18:13:04.878389] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:46.354 [2024-11-18 18:13:04.878437] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:46.922 18:13:05 -- host/discovery.sh@101 -- # get_subsystem_names 00:14:46.922 18:13:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:46.922 18:13:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:46.922 18:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.922 18:13:05 -- host/discovery.sh@59 -- # sort 00:14:46.922 18:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:13:05 -- host/discovery.sh@59 -- # xargs 00:14:46.922 18:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@102 -- # get_bdev_list 00:14:46.922 18:13:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:46.922 
18:13:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:46.922 18:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.922 18:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:13:05 -- host/discovery.sh@55 -- # sort 00:14:46.922 18:13:05 -- host/discovery.sh@55 -- # xargs 00:14:46.922 18:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:14:46.922 18:13:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:46.922 18:13:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:46.922 18:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.922 18:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:13:05 -- host/discovery.sh@63 -- # xargs 00:14:46.922 18:13:05 -- host/discovery.sh@63 -- # sort -n 00:14:46.922 18:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:14:46.922 18:13:05 -- host/discovery.sh@104 -- # get_notification_count 00:14:46.922 18:13:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:14:46.922 18:13:05 -- host/discovery.sh@74 -- # jq '. | length' 00:14:46.922 18:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.922 18:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:46.922 18:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.181 18:13:05 -- host/discovery.sh@74 -- # notification_count=1 00:14:47.181 18:13:05 -- host/discovery.sh@75 -- # notify_id=1 00:14:47.181 18:13:05 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:14:47.181 18:13:05 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:14:47.181 18:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.181 18:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:47.181 18:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.181 18:13:05 -- host/discovery.sh@109 -- # sleep 1 00:14:48.119 18:13:06 -- host/discovery.sh@110 -- # get_bdev_list 00:14:48.119 18:13:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:48.119 18:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.119 18:13:06 -- host/discovery.sh@55 -- # sort 00:14:48.119 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:48.119 18:13:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:48.119 18:13:06 -- host/discovery.sh@55 -- # xargs 00:14:48.119 18:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.119 18:13:06 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:48.119 18:13:06 -- host/discovery.sh@111 -- # get_notification_count 00:14:48.119 18:13:06 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:48.119 18:13:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:14:48.119 18:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.119 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:48.119 18:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.119 18:13:06 -- host/discovery.sh@74 -- # notification_count=1 00:14:48.119 18:13:06 -- host/discovery.sh@75 -- # notify_id=2 00:14:48.119 18:13:06 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:14:48.119 18:13:06 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:14:48.119 18:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.119 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:48.119 [2024-11-18 18:13:06.679202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:48.119 [2024-11-18 18:13:06.679585] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:48.119 [2024-11-18 18:13:06.679659] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:48.119 18:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.119 18:13:06 -- host/discovery.sh@117 -- # sleep 1 00:14:48.119 [2024-11-18 18:13:06.685528] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:14:48.379 [2024-11-18 18:13:06.749915] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:48.379 [2024-11-18 18:13:06.749963] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:48.379 [2024-11-18 18:13:06.749973] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:49.316 18:13:07 -- host/discovery.sh@118 -- # get_subsystem_names 00:14:49.316 18:13:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:49.316 18:13:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:49.316 18:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.316 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.316 18:13:07 -- host/discovery.sh@59 -- # sort 00:14:49.316 18:13:07 -- host/discovery.sh@59 -- # xargs 00:14:49.316 18:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@119 -- # get_bdev_list 00:14:49.316 18:13:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:49.316 18:13:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:49.316 18:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.316 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.316 18:13:07 -- host/discovery.sh@55 -- # sort 00:14:49.316 18:13:07 -- host/discovery.sh@55 -- # xargs 00:14:49.316 18:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:14:49.316 18:13:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:49.316 18:13:07 -- 
host/discovery.sh@63 -- # sort -n 00:14:49.316 18:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.316 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.316 18:13:07 -- host/discovery.sh@63 -- # xargs 00:14:49.316 18:13:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:49.316 18:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@121 -- # get_notification_count 00:14:49.316 18:13:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:49.316 18:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.316 18:13:07 -- host/discovery.sh@74 -- # jq '. | length' 00:14:49.316 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.316 18:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@74 -- # notification_count=0 00:14:49.316 18:13:07 -- host/discovery.sh@75 -- # notify_id=2 00:14:49.316 18:13:07 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:49.316 18:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.316 18:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.316 [2024-11-18 18:13:07.901147] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:14:49.316 [2024-11-18 18:13:07.901194] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:49.316 18:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.316 18:13:07 -- host/discovery.sh@127 -- # sleep 1 00:14:49.316 [2024-11-18 18:13:07.907146] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:14:49.316 [2024-11-18 18:13:07.907191] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:49.316 [2024-11-18 18:13:07.907361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.316 [2024-11-18 18:13:07.907409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.316 [2024-11-18 18:13:07.907433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.316 [2024-11-18 18:13:07.907450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.316 [2024-11-18 18:13:07.907480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.316 [2024-11-18 18:13:07.907495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.316 [2024-11-18 18:13:07.907512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.316 [2024-11-18 18:13:07.907528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:49.316 [2024-11-18 18:13:07.907572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf55c10 is same with the state(5) to be set 00:14:50.696 18:13:08 -- host/discovery.sh@128 -- # get_subsystem_names 00:14:50.696 18:13:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:50.696 18:13:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:50.696 18:13:08 -- host/discovery.sh@59 -- # sort 00:14:50.696 18:13:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.696 18:13:08 -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 18:13:08 -- host/discovery.sh@59 -- # xargs 00:14:50.696 18:13:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:08 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.696 18:13:08 -- host/discovery.sh@129 -- # get_bdev_list 00:14:50.696 18:13:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:50.696 18:13:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:50.696 18:13:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.696 18:13:08 -- host/discovery.sh@55 -- # xargs 00:14:50.696 18:13:08 -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 18:13:08 -- host/discovery.sh@55 -- # sort 00:14:50.696 18:13:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:14:50.696 18:13:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:14:50.696 18:13:09 -- host/discovery.sh@63 -- # sort -n 00:14:50.696 18:13:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:14:50.696 18:13:09 -- host/discovery.sh@63 -- # xargs 00:14:50.696 18:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.696 18:13:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 18:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@131 -- # get_notification_count 00:14:50.696 18:13:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:50.696 18:13:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:50.696 18:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.696 18:13:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 18:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@74 -- # notification_count=0 00:14:50.696 18:13:09 -- host/discovery.sh@75 -- # notify_id=2 00:14:50.696 18:13:09 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:14:50.696 18:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.696 18:13:09 -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 18:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.696 18:13:09 -- host/discovery.sh@135 -- # sleep 1 00:14:51.658 18:13:10 -- host/discovery.sh@136 -- # get_subsystem_names 00:14:51.658 18:13:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:14:51.658 18:13:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.658 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:51.658 18:13:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:14:51.658 18:13:10 -- host/discovery.sh@59 -- # sort 00:14:51.658 18:13:10 -- host/discovery.sh@59 -- # xargs 00:14:51.658 18:13:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.658 18:13:10 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:14:51.658 18:13:10 -- host/discovery.sh@137 -- # get_bdev_list 00:14:51.658 18:13:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:51.658 18:13:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.658 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:51.658 18:13:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:51.658 18:13:10 -- host/discovery.sh@55 -- # xargs 00:14:51.658 18:13:10 -- host/discovery.sh@55 -- # sort 00:14:51.658 18:13:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.658 18:13:10 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:14:51.658 18:13:10 -- host/discovery.sh@138 -- # get_notification_count 00:14:51.658 18:13:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:14:51.658 18:13:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:14:51.658 18:13:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.658 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:51.917 18:13:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.917 18:13:10 -- host/discovery.sh@74 -- # notification_count=2 00:14:51.917 18:13:10 -- host/discovery.sh@75 -- # notify_id=4 00:14:51.917 18:13:10 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:14:51.917 18:13:10 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:51.917 18:13:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.917 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:52.854 [2024-11-18 18:13:11.319324] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:52.854 [2024-11-18 18:13:11.319356] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:52.854 [2024-11-18 18:13:11.319382] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:52.854 [2024-11-18 18:13:11.325373] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:14:52.854 [2024-11-18 18:13:11.384904] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:52.854 [2024-11-18 18:13:11.384960] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:14:52.854 18:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.854 18:13:11 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:52.854 18:13:11 -- common/autotest_common.sh@650 -- # local es=0 00:14:52.854 18:13:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:52.854 18:13:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:52.854 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.854 18:13:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:52.854 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.854 18:13:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:52.854 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.854 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:52.854 request: 00:14:52.854 { 00:14:52.854 "name": "nvme", 00:14:52.854 "trtype": "tcp", 00:14:52.854 "traddr": "10.0.0.2", 00:14:52.854 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:52.854 "adrfam": "ipv4", 00:14:52.855 "trsvcid": "8009", 00:14:52.855 "wait_for_attach": true, 00:14:52.855 "method": "bdev_nvme_start_discovery", 00:14:52.855 "req_id": 1 00:14:52.855 } 00:14:52.855 Got JSON-RPC error response 00:14:52.855 response: 00:14:52.855 { 00:14:52.855 "code": -17, 00:14:52.855 "message": "File exists" 00:14:52.855 } 00:14:52.855 18:13:11 -- common/autotest_common.sh@589 -- # 
[[ 1 == 0 ]] 00:14:52.855 18:13:11 -- common/autotest_common.sh@653 -- # es=1 00:14:52.855 18:13:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.855 18:13:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.855 18:13:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.855 18:13:11 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:14:52.855 18:13:11 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:52.855 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.855 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:52.855 18:13:11 -- host/discovery.sh@67 -- # sort 00:14:52.855 18:13:11 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:52.855 18:13:11 -- host/discovery.sh@67 -- # xargs 00:14:52.855 18:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.114 18:13:11 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:14:53.114 18:13:11 -- host/discovery.sh@147 -- # get_bdev_list 00:14:53.114 18:13:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:53.114 18:13:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:53.114 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.114 18:13:11 -- host/discovery.sh@55 -- # sort 00:14:53.114 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.114 18:13:11 -- host/discovery.sh@55 -- # xargs 00:14:53.114 18:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.114 18:13:11 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:53.114 18:13:11 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:53.114 18:13:11 -- common/autotest_common.sh@650 -- # local es=0 00:14:53.114 18:13:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:53.114 18:13:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:53.114 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.114 18:13:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:53.114 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.114 18:13:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:14:53.114 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.114 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.114 request: 00:14:53.114 { 00:14:53.114 "name": "nvme_second", 00:14:53.114 "trtype": "tcp", 00:14:53.114 "traddr": "10.0.0.2", 00:14:53.114 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:53.114 "adrfam": "ipv4", 00:14:53.114 "trsvcid": "8009", 00:14:53.114 "wait_for_attach": true, 00:14:53.114 "method": "bdev_nvme_start_discovery", 00:14:53.114 "req_id": 1 00:14:53.114 } 00:14:53.114 Got JSON-RPC error response 00:14:53.114 response: 00:14:53.114 { 00:14:53.114 "code": -17, 00:14:53.114 "message": "File exists" 00:14:53.114 } 00:14:53.114 18:13:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:53.114 18:13:11 -- common/autotest_common.sh@653 -- # es=1 00:14:53.114 18:13:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.114 18:13:11 -- common/autotest_common.sh@672 -- 
# [[ -n '' ]] 00:14:53.114 18:13:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.114 18:13:11 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:14:53.114 18:13:11 -- host/discovery.sh@67 -- # sort 00:14:53.114 18:13:11 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:53.114 18:13:11 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:53.114 18:13:11 -- host/discovery.sh@67 -- # xargs 00:14:53.114 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.114 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.114 18:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.114 18:13:11 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:14:53.115 18:13:11 -- host/discovery.sh@153 -- # get_bdev_list 00:14:53.115 18:13:11 -- host/discovery.sh@55 -- # sort 00:14:53.115 18:13:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:53.115 18:13:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:14:53.115 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.115 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:53.115 18:13:11 -- host/discovery.sh@55 -- # xargs 00:14:53.115 18:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.115 18:13:11 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:14:53.115 18:13:11 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:53.115 18:13:11 -- common/autotest_common.sh@650 -- # local es=0 00:14:53.115 18:13:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:53.115 18:13:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:53.115 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.115 18:13:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:53.115 18:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.115 18:13:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:14:53.115 18:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.115 18:13:11 -- common/autotest_common.sh@10 -- # set +x 00:14:54.053 [2024-11-18 18:13:12.646840] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:54.053 [2024-11-18 18:13:12.646987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:54.053 [2024-11-18 18:13:12.647066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:54.053 [2024-11-18 18:13:12.647092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa7270 with addr=10.0.0.2, port=8010 00:14:54.053 [2024-11-18 18:13:12.647134] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:54.053 [2024-11-18 18:13:12.647148] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:54.053 [2024-11-18 18:13:12.647161] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:55.432 [2024-11-18 18:13:13.646820] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:14:55.432 
[2024-11-18 18:13:13.646957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:55.432 [2024-11-18 18:13:13.647017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:55.432 [2024-11-18 18:13:13.647041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa7270 with addr=10.0.0.2, port=8010 00:14:55.432 [2024-11-18 18:13:13.647066] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:55.432 [2024-11-18 18:13:13.647079] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:55.432 [2024-11-18 18:13:13.647091] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:14:56.371 [2024-11-18 18:13:14.646670] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:14:56.371 request: 00:14:56.371 { 00:14:56.371 "name": "nvme_second", 00:14:56.371 "trtype": "tcp", 00:14:56.371 "traddr": "10.0.0.2", 00:14:56.371 "hostnqn": "nqn.2021-12.io.spdk:test", 00:14:56.371 "adrfam": "ipv4", 00:14:56.371 "trsvcid": "8010", 00:14:56.371 "attach_timeout_ms": 3000, 00:14:56.371 "method": "bdev_nvme_start_discovery", 00:14:56.371 "req_id": 1 00:14:56.371 } 00:14:56.371 Got JSON-RPC error response 00:14:56.371 response: 00:14:56.371 { 00:14:56.371 "code": -110, 00:14:56.371 "message": "Connection timed out" 00:14:56.371 } 00:14:56.371 18:13:14 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:56.371 18:13:14 -- common/autotest_common.sh@653 -- # es=1 00:14:56.371 18:13:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.371 18:13:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.371 18:13:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.371 18:13:14 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:14:56.371 18:13:14 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:14:56.371 18:13:14 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:14:56.371 18:13:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.371 18:13:14 -- common/autotest_common.sh@10 -- # set +x 00:14:56.371 18:13:14 -- host/discovery.sh@67 -- # sort 00:14:56.371 18:13:14 -- host/discovery.sh@67 -- # xargs 00:14:56.371 18:13:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.371 18:13:14 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:14:56.371 18:13:14 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:14:56.371 18:13:14 -- host/discovery.sh@162 -- # kill 70685 00:14:56.371 18:13:14 -- host/discovery.sh@163 -- # nvmftestfini 00:14:56.371 18:13:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:56.371 18:13:14 -- nvmf/common.sh@116 -- # sync 00:14:56.371 18:13:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:56.371 18:13:14 -- nvmf/common.sh@119 -- # set +e 00:14:56.371 18:13:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:56.371 18:13:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:56.371 rmmod nvme_tcp 00:14:56.371 rmmod nvme_fabrics 00:14:56.371 rmmod nvme_keyring 00:14:56.371 18:13:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:56.371 18:13:14 -- nvmf/common.sh@123 -- # set -e 00:14:56.371 18:13:14 -- nvmf/common.sh@124 -- # return 0 00:14:56.371 18:13:14 -- nvmf/common.sh@477 -- # '[' -n 70653 ']' 00:14:56.371 18:13:14 -- nvmf/common.sh@478 -- # killprocess 70653 00:14:56.371 18:13:14 -- common/autotest_common.sh@936 -- # '[' -z 70653 ']' 
00:14:56.371 18:13:14 -- common/autotest_common.sh@940 -- # kill -0 70653 00:14:56.371 18:13:14 -- common/autotest_common.sh@941 -- # uname 00:14:56.371 18:13:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.371 18:13:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70653 00:14:56.371 18:13:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:56.371 18:13:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:56.371 killing process with pid 70653 00:14:56.371 18:13:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70653' 00:14:56.371 18:13:14 -- common/autotest_common.sh@955 -- # kill 70653 00:14:56.371 18:13:14 -- common/autotest_common.sh@960 -- # wait 70653 00:14:56.630 18:13:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.630 18:13:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.630 18:13:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.630 18:13:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.630 18:13:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.630 18:13:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.630 18:13:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.630 18:13:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.630 18:13:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:56.630 00:14:56.630 real 0m13.986s 00:14:56.630 user 0m26.955s 00:14:56.630 sys 0m2.122s 00:14:56.630 18:13:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:56.630 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:56.630 ************************************ 00:14:56.630 END TEST nvmf_discovery 00:14:56.630 ************************************ 00:14:56.630 18:13:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:56.630 18:13:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.630 18:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.630 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:56.630 ************************************ 00:14:56.630 START TEST nvmf_discovery_remove_ifc 00:14:56.630 ************************************ 00:14:56.630 18:13:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:14:56.630 * Looking for test storage... 
00:14:56.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:56.630 18:13:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:56.630 18:13:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:56.630 18:13:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:56.890 18:13:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:56.890 18:13:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:56.890 18:13:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:56.890 18:13:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:56.890 18:13:15 -- scripts/common.sh@335 -- # IFS=.-: 00:14:56.890 18:13:15 -- scripts/common.sh@335 -- # read -ra ver1 00:14:56.890 18:13:15 -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.890 18:13:15 -- scripts/common.sh@336 -- # read -ra ver2 00:14:56.890 18:13:15 -- scripts/common.sh@337 -- # local 'op=<' 00:14:56.890 18:13:15 -- scripts/common.sh@339 -- # ver1_l=2 00:14:56.890 18:13:15 -- scripts/common.sh@340 -- # ver2_l=1 00:14:56.890 18:13:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:56.890 18:13:15 -- scripts/common.sh@343 -- # case "$op" in 00:14:56.890 18:13:15 -- scripts/common.sh@344 -- # : 1 00:14:56.890 18:13:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:56.890 18:13:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.890 18:13:15 -- scripts/common.sh@364 -- # decimal 1 00:14:56.890 18:13:15 -- scripts/common.sh@352 -- # local d=1 00:14:56.890 18:13:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.890 18:13:15 -- scripts/common.sh@354 -- # echo 1 00:14:56.890 18:13:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:56.890 18:13:15 -- scripts/common.sh@365 -- # decimal 2 00:14:56.890 18:13:15 -- scripts/common.sh@352 -- # local d=2 00:14:56.890 18:13:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.890 18:13:15 -- scripts/common.sh@354 -- # echo 2 00:14:56.890 18:13:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:56.890 18:13:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:56.890 18:13:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:56.890 18:13:15 -- scripts/common.sh@367 -- # return 0 00:14:56.890 18:13:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.890 18:13:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.890 --rc genhtml_branch_coverage=1 00:14:56.890 --rc genhtml_function_coverage=1 00:14:56.890 --rc genhtml_legend=1 00:14:56.890 --rc geninfo_all_blocks=1 00:14:56.890 --rc geninfo_unexecuted_blocks=1 00:14:56.890 00:14:56.890 ' 00:14:56.890 18:13:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.890 --rc genhtml_branch_coverage=1 00:14:56.890 --rc genhtml_function_coverage=1 00:14:56.890 --rc genhtml_legend=1 00:14:56.890 --rc geninfo_all_blocks=1 00:14:56.890 --rc geninfo_unexecuted_blocks=1 00:14:56.890 00:14:56.890 ' 00:14:56.890 18:13:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.890 --rc genhtml_branch_coverage=1 00:14:56.890 --rc genhtml_function_coverage=1 00:14:56.890 --rc genhtml_legend=1 00:14:56.890 --rc geninfo_all_blocks=1 00:14:56.890 --rc geninfo_unexecuted_blocks=1 00:14:56.890 00:14:56.890 ' 00:14:56.890 
18:13:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:56.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.890 --rc genhtml_branch_coverage=1 00:14:56.890 --rc genhtml_function_coverage=1 00:14:56.890 --rc genhtml_legend=1 00:14:56.890 --rc geninfo_all_blocks=1 00:14:56.890 --rc geninfo_unexecuted_blocks=1 00:14:56.890 00:14:56.890 ' 00:14:56.890 18:13:15 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.891 18:13:15 -- nvmf/common.sh@7 -- # uname -s 00:14:56.891 18:13:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.891 18:13:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.891 18:13:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.891 18:13:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.891 18:13:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.891 18:13:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.891 18:13:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.891 18:13:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.891 18:13:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.891 18:13:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:56.891 18:13:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:14:56.891 18:13:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.891 18:13:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.891 18:13:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.891 18:13:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.891 18:13:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.891 18:13:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.891 18:13:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.891 18:13:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.891 18:13:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.891 18:13:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.891 18:13:15 -- paths/export.sh@5 -- # export PATH 00:14:56.891 18:13:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.891 18:13:15 -- nvmf/common.sh@46 -- # : 0 00:14:56.891 18:13:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.891 18:13:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.891 18:13:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.891 18:13:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.891 18:13:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.891 18:13:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.891 18:13:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.891 18:13:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:14:56.891 18:13:15 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:14:56.891 18:13:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:56.891 18:13:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.891 18:13:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:56.891 18:13:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:56.891 18:13:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:56.891 18:13:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.891 18:13:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.891 18:13:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.891 18:13:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:56.891 18:13:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:56.891 18:13:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.891 18:13:15 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.891 18:13:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.891 18:13:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:56.891 18:13:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.891 18:13:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.891 18:13:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.891 18:13:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.891 18:13:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.891 18:13:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.891 18:13:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.891 18:13:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.891 18:13:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:56.891 18:13:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:56.891 Cannot find device "nvmf_tgt_br" 00:14:56.891 18:13:15 -- nvmf/common.sh@154 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.891 Cannot find device "nvmf_tgt_br2" 00:14:56.891 18:13:15 -- nvmf/common.sh@155 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:56.891 18:13:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:56.891 Cannot find device "nvmf_tgt_br" 00:14:56.891 18:13:15 -- nvmf/common.sh@157 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:56.891 Cannot find device "nvmf_tgt_br2" 00:14:56.891 18:13:15 -- nvmf/common.sh@158 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:56.891 18:13:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:56.891 18:13:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.891 18:13:15 -- nvmf/common.sh@161 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.891 18:13:15 -- nvmf/common.sh@162 -- # true 00:14:56.891 18:13:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.891 18:13:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.891 18:13:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.891 18:13:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.151 18:13:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.151 18:13:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.151 18:13:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.151 18:13:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.151 18:13:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.151 18:13:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:57.151 18:13:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:57.151 18:13:15 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:57.151 18:13:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:57.151 18:13:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.151 18:13:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.151 18:13:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.151 18:13:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:57.151 18:13:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:57.151 18:13:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.151 18:13:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.151 18:13:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.151 18:13:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.151 18:13:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.151 18:13:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:57.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:57.151 00:14:57.151 --- 10.0.0.2 ping statistics --- 00:14:57.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.151 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:57.151 18:13:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:57.151 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.151 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:57.151 00:14:57.151 --- 10.0.0.3 ping statistics --- 00:14:57.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.151 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:57.151 18:13:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:57.151 00:14:57.151 --- 10.0.0.1 ping statistics --- 00:14:57.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.151 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:57.151 18:13:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.151 18:13:15 -- nvmf/common.sh@421 -- # return 0 00:14:57.151 18:13:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.151 18:13:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.151 18:13:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.151 18:13:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.151 18:13:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.151 18:13:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.151 18:13:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.151 18:13:15 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:14:57.151 18:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.151 18:13:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.151 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.151 18:13:15 -- nvmf/common.sh@469 -- # nvmfpid=71190 00:14:57.151 18:13:15 -- nvmf/common.sh@470 -- # waitforlisten 71190 00:14:57.151 18:13:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.151 18:13:15 -- common/autotest_common.sh@829 -- # '[' -z 71190 ']' 00:14:57.151 18:13:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.151 18:13:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.151 18:13:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.151 18:13:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.151 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.410 [2024-11-18 18:13:15.765731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:57.410 [2024-11-18 18:13:15.765828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.410 [2024-11-18 18:13:15.903786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.410 [2024-11-18 18:13:15.954938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.410 [2024-11-18 18:13:15.955218] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.410 [2024-11-18 18:13:15.955238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.410 [2024-11-18 18:13:15.955247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
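The nvmf_veth_init trace above builds the virtual topology the rest of this test runs on: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), an initiator-side interface at 10.0.0.1, and an nvmf_br bridge joining the peer ends, verified by the three pings. A condensed sketch of the same commands, all taken from the trace (the second target veth is omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target; the reverse check runs via ip netns exec ... ping 10.0.0.1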
00:14:57.410 [2024-11-18 18:13:15.955275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.349 18:13:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.349 18:13:16 -- common/autotest_common.sh@862 -- # return 0 00:14:58.349 18:13:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.349 18:13:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.349 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.349 18:13:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.349 18:13:16 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:14:58.349 18:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.349 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.349 [2024-11-18 18:13:16.750921] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.349 [2024-11-18 18:13:16.759042] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:58.349 null0 00:14:58.349 [2024-11-18 18:13:16.790943] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.349 18:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.349 18:13:16 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71219 00:14:58.349 18:13:16 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:14:58.349 18:13:16 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71219 /tmp/host.sock 00:14:58.349 18:13:16 -- common/autotest_common.sh@829 -- # '[' -z 71219 ']' 00:14:58.349 18:13:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:14:58.349 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:14:58.349 18:13:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.349 18:13:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:14:58.349 18:13:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.349 18:13:16 -- common/autotest_common.sh@10 -- # set +x 00:14:58.349 [2024-11-18 18:13:16.861063] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:58.349 [2024-11-18 18:13:16.861149] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71219 ] 00:14:58.608 [2024-11-18 18:13:16.998101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.608 [2024-11-18 18:13:17.067922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:58.608 [2024-11-18 18:13:17.068104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.608 18:13:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.608 18:13:17 -- common/autotest_common.sh@862 -- # return 0 00:14:58.608 18:13:17 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.608 18:13:17 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:14:58.608 18:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.608 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.608 18:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.608 18:13:17 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:14:58.608 18:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.608 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:58.608 18:13:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.608 18:13:17 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:14:58.608 18:13:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.608 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.984 [2024-11-18 18:13:18.184205] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:14:59.984 [2024-11-18 18:13:18.184255] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:14:59.984 [2024-11-18 18:13:18.184274] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:14:59.984 [2024-11-18 18:13:18.190267] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:14:59.984 [2024-11-18 18:13:18.246232] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:14:59.984 [2024-11-18 18:13:18.246314] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:14:59.984 [2024-11-18 18:13:18.246358] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:14:59.984 [2024-11-18 18:13:18.246375] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:14:59.984 [2024-11-18 18:13:18.246402] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:14:59.984 18:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:59.984 18:13:18 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.984 18:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.984 18:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:59.984 [2024-11-18 18:13:18.252601] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x943be0 was disconnected and freed. delete nvme_qpair. 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:59.984 18:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:14:59.984 18:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:14:59.984 18:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:14:59.984 18:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:14:59.984 18:13:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:00.920 18:13:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:00.920 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:00.920 18:13:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:00.920 18:13:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:01.854 18:13:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:01.854 18:13:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:01.854 18:13:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:01.854 18:13:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:01.854 18:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.854 18:13:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:01.854 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:15:02.112 18:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.112 18:13:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:02.112 18:13:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:03.048 18:13:21 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:03.048 18:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.048 18:13:21 -- common/autotest_common.sh@10 -- # set +x 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:03.048 18:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:03.048 18:13:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:04.028 18:13:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:04.028 18:13:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.028 18:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.028 18:13:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:04.028 18:13:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.028 18:13:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:04.028 18:13:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:04.028 18:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.287 18:13:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:04.287 18:13:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.224 18:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.224 18:13:23 -- common/autotest_common.sh@10 -- # set +x 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:05.224 18:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.224 [2024-11-18 18:13:23.674739] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:05.224 [2024-11-18 18:13:23.675006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.224 [2024-11-18 18:13:23.675179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.224 [2024-11-18 18:13:23.675333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.224 [2024-11-18 18:13:23.675347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.224 [2024-11-18 18:13:23.675357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.224 [2024-11-18 18:13:23.675365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.224 [2024-11-18 18:13:23.675375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.224 [2024-11-18 18:13:23.675384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.224 [2024-11-18 
18:13:23.675393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.224 [2024-11-18 18:13:23.675401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.224 [2024-11-18 18:13:23.675410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b8de0 is same with the state(5) to be set 00:15:05.224 [2024-11-18 18:13:23.684737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b8de0 (9): Bad file descriptor 00:15:05.224 [2024-11-18 18:13:23.694776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:05.224 18:13:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:06.161 18:13:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:06.161 18:13:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.161 18:13:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:06.161 18:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.161 18:13:24 -- common/autotest_common.sh@10 -- # set +x 00:15:06.161 18:13:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:06.161 18:13:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:06.161 [2024-11-18 18:13:24.731692] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:07.539 [2024-11-18 18:13:25.754661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:08.476 [2024-11-18 18:13:26.778657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:08.477 [2024-11-18 18:13:26.778791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b8de0 with addr=10.0.0.2, port=4420 00:15:08.477 [2024-11-18 18:13:26.778827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b8de0 is same with the state(5) to be set 00:15:08.477 [2024-11-18 18:13:26.778881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:08.477 [2024-11-18 18:13:26.778905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:08.477 [2024-11-18 18:13:26.778924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:08.477 [2024-11-18 18:13:26.778944] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:08.477 [2024-11-18 18:13:26.779778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b8de0 (9): Bad file descriptor 00:15:08.477 [2024-11-18 18:13:26.779842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
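The connect() errno-110 storm and the failed reset above are the expected result of the step this test performed just before: it deleted the target's data-path address, downed the interface, and then polls the host until the namespace bdev drops out of bdev_get_bdevs. A condensed sketch of that sequence, using the same jq pipeline the get_bdev_list helper runs (scripts/rpc.py standing in for the harness's rpc_cmd wrapper):

    # Trigger the path failure, then wait for the host's bdev list to go empty.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                  | jq -r '.[].name' | sort | xargs)" ]; do
        sleep 1
    done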
00:15:08.477 [2024-11-18 18:13:26.779897] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:08.477 [2024-11-18 18:13:26.779977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.477 [2024-11-18 18:13:26.780015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.477 [2024-11-18 18:13:26.780041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.477 [2024-11-18 18:13:26.780063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.477 [2024-11-18 18:13:26.780084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.477 [2024-11-18 18:13:26.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.477 [2024-11-18 18:13:26.780125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.477 [2024-11-18 18:13:26.780145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.477 [2024-11-18 18:13:26.780167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.477 [2024-11-18 18:13:26.780186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.477 [2024-11-18 18:13:26.780206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
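How long the host keeps retrying before tearing the controller down is bounded by the options the test passed when it attached the discovery controller; the call below is copied verbatim from the attach earlier in this test. With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the controller above is declared lost after roughly two failed reconnect attempts, which is why the reset gives up almost immediately.

    # rpc_cmd is the harness wrapper around scripts/rpc.py.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach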
00:15:08.477 [2024-11-18 18:13:26.780268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b91f0 (9): Bad file descriptor 00:15:08.477 [2024-11-18 18:13:26.781268] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:08.477 [2024-11-18 18:13:26.781301] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:08.477 18:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.477 18:13:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:08.477 18:13:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.414 18:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.414 18:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.414 18:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:09.414 18:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.414 18:13:27 -- common/autotest_common.sh@10 -- # set +x 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:09.414 18:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:09.414 18:13:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:10.351 [2024-11-18 18:13:28.789210] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:10.351 [2024-11-18 18:13:28.789235] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:10.351 [2024-11-18 18:13:28.789253] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:10.351 [2024-11-18 18:13:28.795265] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:10.351 [2024-11-18 18:13:28.850735] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:10.351 [2024-11-18 18:13:28.850921] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:10.351 [2024-11-18 18:13:28.851034] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:10.351 [2024-11-18 18:13:28.851168] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:10.351 [2024-11-18 18:13:28.851232] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:10.351 [2024-11-18 18:13:28.857820] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8face0 was disconnected and freed. delete nvme_qpair. 00:15:10.351 18:13:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:10.351 18:13:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:10.351 18:13:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:10.351 18:13:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:10.351 18:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.351 18:13:28 -- common/autotest_common.sh@10 -- # set +x 00:15:10.351 18:13:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:10.610 18:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.611 18:13:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:10.611 18:13:29 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:10.611 18:13:29 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71219 00:15:10.611 18:13:29 -- common/autotest_common.sh@936 -- # '[' -z 71219 ']' 00:15:10.611 18:13:29 -- common/autotest_common.sh@940 -- # kill -0 71219 00:15:10.611 18:13:29 -- common/autotest_common.sh@941 -- # uname 00:15:10.611 18:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.611 18:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71219 00:15:10.611 killing process with pid 71219 00:15:10.611 18:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.611 18:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.611 18:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71219' 00:15:10.611 18:13:29 -- common/autotest_common.sh@955 -- # kill 71219 00:15:10.611 18:13:29 -- common/autotest_common.sh@960 -- # wait 71219 00:15:10.870 18:13:29 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:10.870 18:13:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:10.870 18:13:29 -- nvmf/common.sh@116 -- # sync 00:15:10.870 18:13:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:10.870 18:13:29 -- nvmf/common.sh@119 -- # set +e 00:15:10.870 18:13:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:10.870 18:13:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:10.870 rmmod nvme_tcp 00:15:10.870 rmmod nvme_fabrics 00:15:10.870 rmmod nvme_keyring 00:15:10.870 18:13:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:10.870 18:13:29 -- nvmf/common.sh@123 -- # set -e 00:15:10.870 18:13:29 -- nvmf/common.sh@124 -- # return 0 00:15:10.870 18:13:29 -- nvmf/common.sh@477 -- # '[' -n 71190 ']' 00:15:10.870 18:13:29 -- nvmf/common.sh@478 -- # killprocess 71190 00:15:10.870 18:13:29 -- common/autotest_common.sh@936 -- # '[' -z 71190 ']' 00:15:10.870 18:13:29 -- common/autotest_common.sh@940 -- # kill -0 71190 00:15:10.870 18:13:29 -- common/autotest_common.sh@941 -- # uname 00:15:10.870 18:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.870 18:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71190 00:15:10.870 killing process with pid 71190 00:15:10.870 18:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.870 18:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
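Teardown follows the same killprocess/nvmftestfini pattern for both daemons: confirm the pid still belongs to an SPDK reactor (the ps comm= check in the trace), kill it, wait for it, then unload the kernel NVMe-oF modules. A reduced sketch of the sequence being traced here, with the pid left as a placeholder:

    # $pid is whichever daemon is being stopped (71219 for the host app, 71190 for the target).
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
    modprobe -v -r nvme-tcp       # per the trace, this also removes nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics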
00:15:10.870 18:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71190' 00:15:10.870 18:13:29 -- common/autotest_common.sh@955 -- # kill 71190 00:15:10.870 18:13:29 -- common/autotest_common.sh@960 -- # wait 71190 00:15:11.128 18:13:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:11.128 18:13:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:11.128 18:13:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:11.128 18:13:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.128 18:13:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:11.128 18:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.128 18:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.128 18:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.128 18:13:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:11.128 00:15:11.128 real 0m14.483s 00:15:11.128 user 0m22.697s 00:15:11.128 sys 0m2.444s 00:15:11.128 18:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:11.128 ************************************ 00:15:11.128 END TEST nvmf_discovery_remove_ifc 00:15:11.128 ************************************ 00:15:11.128 18:13:29 -- common/autotest_common.sh@10 -- # set +x 00:15:11.128 18:13:29 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:11.128 18:13:29 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:11.128 18:13:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.128 18:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.128 18:13:29 -- common/autotest_common.sh@10 -- # set +x 00:15:11.128 ************************************ 00:15:11.128 START TEST nvmf_digest 00:15:11.128 ************************************ 00:15:11.128 18:13:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:11.128 * Looking for test storage... 00:15:11.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:11.128 18:13:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:11.128 18:13:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:11.128 18:13:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:11.388 18:13:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:11.388 18:13:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:11.388 18:13:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:11.388 18:13:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:11.388 18:13:29 -- scripts/common.sh@335 -- # IFS=.-: 00:15:11.388 18:13:29 -- scripts/common.sh@335 -- # read -ra ver1 00:15:11.388 18:13:29 -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.388 18:13:29 -- scripts/common.sh@336 -- # read -ra ver2 00:15:11.388 18:13:29 -- scripts/common.sh@337 -- # local 'op=<' 00:15:11.388 18:13:29 -- scripts/common.sh@339 -- # ver1_l=2 00:15:11.388 18:13:29 -- scripts/common.sh@340 -- # ver2_l=1 00:15:11.388 18:13:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:11.388 18:13:29 -- scripts/common.sh@343 -- # case "$op" in 00:15:11.388 18:13:29 -- scripts/common.sh@344 -- # : 1 00:15:11.388 18:13:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:11.388 18:13:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.388 18:13:29 -- scripts/common.sh@364 -- # decimal 1 00:15:11.388 18:13:29 -- scripts/common.sh@352 -- # local d=1 00:15:11.388 18:13:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.388 18:13:29 -- scripts/common.sh@354 -- # echo 1 00:15:11.388 18:13:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:11.388 18:13:29 -- scripts/common.sh@365 -- # decimal 2 00:15:11.388 18:13:29 -- scripts/common.sh@352 -- # local d=2 00:15:11.388 18:13:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.388 18:13:29 -- scripts/common.sh@354 -- # echo 2 00:15:11.388 18:13:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:11.388 18:13:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:11.388 18:13:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:11.388 18:13:29 -- scripts/common.sh@367 -- # return 0 00:15:11.388 18:13:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.388 18:13:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:11.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.388 --rc genhtml_branch_coverage=1 00:15:11.388 --rc genhtml_function_coverage=1 00:15:11.388 --rc genhtml_legend=1 00:15:11.388 --rc geninfo_all_blocks=1 00:15:11.388 --rc geninfo_unexecuted_blocks=1 00:15:11.388 00:15:11.388 ' 00:15:11.388 18:13:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:11.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.388 --rc genhtml_branch_coverage=1 00:15:11.388 --rc genhtml_function_coverage=1 00:15:11.388 --rc genhtml_legend=1 00:15:11.388 --rc geninfo_all_blocks=1 00:15:11.388 --rc geninfo_unexecuted_blocks=1 00:15:11.388 00:15:11.388 ' 00:15:11.388 18:13:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:11.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.388 --rc genhtml_branch_coverage=1 00:15:11.388 --rc genhtml_function_coverage=1 00:15:11.389 --rc genhtml_legend=1 00:15:11.389 --rc geninfo_all_blocks=1 00:15:11.389 --rc geninfo_unexecuted_blocks=1 00:15:11.389 00:15:11.389 ' 00:15:11.389 18:13:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:11.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.389 --rc genhtml_branch_coverage=1 00:15:11.389 --rc genhtml_function_coverage=1 00:15:11.389 --rc genhtml_legend=1 00:15:11.389 --rc geninfo_all_blocks=1 00:15:11.389 --rc geninfo_unexecuted_blocks=1 00:15:11.389 00:15:11.389 ' 00:15:11.389 18:13:29 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.389 18:13:29 -- nvmf/common.sh@7 -- # uname -s 00:15:11.389 18:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.389 18:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.389 18:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.389 18:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.389 18:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.389 18:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.389 18:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.389 18:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.389 18:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.389 18:13:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:15:11.389 
18:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:15:11.389 18:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.389 18:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.389 18:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.389 18:13:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.389 18:13:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.389 18:13:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.389 18:13:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.389 18:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.389 18:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.389 18:13:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.389 18:13:29 -- paths/export.sh@5 -- # export PATH 00:15:11.389 18:13:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.389 18:13:29 -- nvmf/common.sh@46 -- # : 0 00:15:11.389 18:13:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:11.389 18:13:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:11.389 18:13:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:11.389 18:13:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.389 18:13:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.389 18:13:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
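For the digest suite the same nvmf/common.sh preamble runs again; the NVME_HOST arguments assembled here pair a freshly generated host NQN from nvme-cli with its uuid as the host ID. A small sketch of that step; the generated values match the trace above, while the suffix extraction shown is an assumption for illustration:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:9f9bd036-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: uuid portion after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")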
00:15:11.389 18:13:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:11.389 18:13:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:11.389 18:13:29 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:11.389 18:13:29 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:11.389 18:13:29 -- host/digest.sh@16 -- # runtime=2 00:15:11.389 18:13:29 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:11.389 18:13:29 -- host/digest.sh@132 -- # nvmftestinit 00:15:11.389 18:13:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:11.389 18:13:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.389 18:13:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:11.389 18:13:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:11.389 18:13:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:11.389 18:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.389 18:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.389 18:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.389 18:13:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:11.389 18:13:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:11.389 18:13:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.389 18:13:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.389 18:13:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.389 18:13:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:11.389 18:13:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.389 18:13:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.389 18:13:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.389 18:13:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.389 18:13:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.389 18:13:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.389 18:13:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.389 18:13:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.389 18:13:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:11.389 18:13:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:11.389 Cannot find device "nvmf_tgt_br" 00:15:11.389 18:13:29 -- nvmf/common.sh@154 -- # true 00:15:11.389 18:13:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.389 Cannot find device "nvmf_tgt_br2" 00:15:11.389 18:13:29 -- nvmf/common.sh@155 -- # true 00:15:11.389 18:13:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:11.389 18:13:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:11.389 Cannot find device "nvmf_tgt_br" 00:15:11.389 18:13:29 -- nvmf/common.sh@157 -- # true 00:15:11.389 18:13:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:11.389 Cannot find device "nvmf_tgt_br2" 00:15:11.389 18:13:29 -- nvmf/common.sh@158 -- # true 00:15:11.389 18:13:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:11.389 18:13:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:11.389 
18:13:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.389 18:13:29 -- nvmf/common.sh@161 -- # true 00:15:11.389 18:13:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.649 18:13:29 -- nvmf/common.sh@162 -- # true 00:15:11.649 18:13:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.649 18:13:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.649 18:13:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.649 18:13:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.649 18:13:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.649 18:13:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.649 18:13:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.649 18:13:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.649 18:13:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.649 18:13:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:11.649 18:13:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:11.649 18:13:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:11.649 18:13:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:11.649 18:13:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.649 18:13:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.649 18:13:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.649 18:13:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:11.649 18:13:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:11.649 18:13:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.649 18:13:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.649 18:13:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.649 18:13:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.649 18:13:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.649 18:13:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:11.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:15:11.649 00:15:11.649 --- 10.0.0.2 ping statistics --- 00:15:11.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.649 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:11.649 18:13:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:11.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:11.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:11.649 00:15:11.649 --- 10.0.0.3 ping statistics --- 00:15:11.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.649 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:11.649 18:13:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:11.649 00:15:11.649 --- 10.0.0.1 ping statistics --- 00:15:11.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.649 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:11.649 18:13:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.649 18:13:30 -- nvmf/common.sh@421 -- # return 0 00:15:11.649 18:13:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:11.649 18:13:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.649 18:13:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:11.649 18:13:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:11.649 18:13:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.649 18:13:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:11.649 18:13:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:11.649 18:13:30 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:11.649 18:13:30 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:15:11.649 18:13:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:11.649 18:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.649 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:11.649 ************************************ 00:15:11.649 START TEST nvmf_digest_clean 00:15:11.649 ************************************ 00:15:11.649 18:13:30 -- common/autotest_common.sh@1114 -- # run_digest 00:15:11.649 18:13:30 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:15:11.649 18:13:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:11.649 18:13:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.649 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:11.649 18:13:30 -- nvmf/common.sh@469 -- # nvmfpid=71636 00:15:11.649 18:13:30 -- nvmf/common.sh@470 -- # waitforlisten 71636 00:15:11.649 18:13:30 -- common/autotest_common.sh@829 -- # '[' -z 71636 ']' 00:15:11.649 18:13:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:11.649 18:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.649 18:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.649 18:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.649 18:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.649 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:11.909 [2024-11-18 18:13:30.265995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:11.909 [2024-11-18 18:13:30.266087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.909 [2024-11-18 18:13:30.407118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.909 [2024-11-18 18:13:30.457763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:11.909 [2024-11-18 18:13:30.457929] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.909 [2024-11-18 18:13:30.457956] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.909 [2024-11-18 18:13:30.457963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.909 [2024-11-18 18:13:30.457993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.909 18:13:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.909 18:13:30 -- common/autotest_common.sh@862 -- # return 0 00:15:11.909 18:13:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.909 18:13:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.909 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.168 18:13:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.168 18:13:30 -- host/digest.sh@120 -- # common_target_config 00:15:12.168 18:13:30 -- host/digest.sh@43 -- # rpc_cmd 00:15:12.168 18:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.168 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.168 null0 00:15:12.168 [2024-11-18 18:13:30.607390] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.168 [2024-11-18 18:13:30.631493] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.169 18:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.169 18:13:30 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:15:12.169 18:13:30 -- host/digest.sh@77 -- # local rw bs qd 00:15:12.169 18:13:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:12.169 18:13:30 -- host/digest.sh@80 -- # rw=randread 00:15:12.169 18:13:30 -- host/digest.sh@80 -- # bs=4096 00:15:12.169 18:13:30 -- host/digest.sh@80 -- # qd=128 00:15:12.169 18:13:30 -- host/digest.sh@82 -- # bperfpid=71659 00:15:12.169 18:13:30 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:12.169 18:13:30 -- host/digest.sh@83 -- # waitforlisten 71659 /var/tmp/bperf.sock 00:15:12.169 18:13:30 -- common/autotest_common.sh@829 -- # '[' -z 71659 ']' 00:15:12.169 18:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:12.169 18:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:12.169 18:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
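nvmf_veth_init above assembles the disposable test network before the target starts: a namespace for the target, veth pairs for the initiator and the two target addresses, a bridge tying the host-side ends together, an iptables accept for TCP port 4420, and ping checks in both directions. Condensed into a hedged sketch (interface names and addresses copied from the trace; run as root, teardown and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc) and listens on 10.0.0.2:4420, which is where every bperf run below connects.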
00:15:12.169 18:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.169 18:13:30 -- common/autotest_common.sh@10 -- # set +x 00:15:12.169 [2024-11-18 18:13:30.690930] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:12.169 [2024-11-18 18:13:30.691045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71659 ] 00:15:12.428 [2024-11-18 18:13:30.830098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.428 [2024-11-18 18:13:30.900603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.364 18:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.364 18:13:31 -- common/autotest_common.sh@862 -- # return 0 00:15:13.364 18:13:31 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:13.364 18:13:31 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:13.364 18:13:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:13.364 18:13:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:13.364 18:13:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:13.623 nvme0n1 00:15:13.623 18:13:32 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:13.623 18:13:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:13.882 Running I/O for 2 seconds... 
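Each run_bperf pass drives a private bdevperf instance purely over its own RPC socket: launch it suspended with --wait-for-rpc, initialize the framework, attach the remote namespace with data digest enabled, run the workload, then read back the accel stats to confirm crc32c really executed in the expected module. A hedged condensation of the sequence traced above (paths, socket and NQN copied from the trace; the real script polls the socket with waitforlisten instead of assuming it is ready):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock
  $SPDK/build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # ... wait for $SOCK to accept connections ...
  $SPDK/scripts/rpc.py -s "$SOCK" framework_start_init
  $SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # --ddgst: verify NVMe/TCP data digests
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
  # which accel module computed crc32c, and how many times it ran:
  $SPDK/scripts/rpc.py -s "$SOCK" accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The three later clean-path runs repeat exactly this shape; only -w, -o and -q change (randread 131072/16, randwrite 4096/128, randwrite 131072/16).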
00:15:15.787 00:15:15.787 Latency(us) 00:15:15.787 [2024-11-18T18:13:34.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.787 [2024-11-18T18:13:34.391Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:15.787 nvme0n1 : 2.00 16354.57 63.89 0.00 0.00 7821.43 6911.07 21328.99 00:15:15.787 [2024-11-18T18:13:34.391Z] =================================================================================================================== 00:15:15.787 [2024-11-18T18:13:34.391Z] Total : 16354.57 63.89 0.00 0.00 7821.43 6911.07 21328.99 00:15:15.787 0 00:15:15.787 18:13:34 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:15.787 18:13:34 -- host/digest.sh@92 -- # get_accel_stats 00:15:15.787 18:13:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:15.787 18:13:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:15.787 | select(.opcode=="crc32c") 00:15:15.787 | "\(.module_name) \(.executed)"' 00:15:15.787 18:13:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:16.047 18:13:34 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:16.047 18:13:34 -- host/digest.sh@93 -- # exp_module=software 00:15:16.047 18:13:34 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:16.047 18:13:34 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:16.047 18:13:34 -- host/digest.sh@97 -- # killprocess 71659 00:15:16.047 18:13:34 -- common/autotest_common.sh@936 -- # '[' -z 71659 ']' 00:15:16.047 18:13:34 -- common/autotest_common.sh@940 -- # kill -0 71659 00:15:16.047 18:13:34 -- common/autotest_common.sh@941 -- # uname 00:15:16.047 18:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.047 18:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71659 00:15:16.047 18:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:16.047 18:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:16.047 killing process with pid 71659 00:15:16.047 18:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71659' 00:15:16.047 Received shutdown signal, test time was about 2.000000 seconds 00:15:16.047 00:15:16.047 Latency(us) 00:15:16.047 [2024-11-18T18:13:34.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.047 [2024-11-18T18:13:34.651Z] =================================================================================================================== 00:15:16.047 [2024-11-18T18:13:34.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.047 18:13:34 -- common/autotest_common.sh@955 -- # kill 71659 00:15:16.047 18:13:34 -- common/autotest_common.sh@960 -- # wait 71659 00:15:16.336 18:13:34 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:15:16.336 18:13:34 -- host/digest.sh@77 -- # local rw bs qd 00:15:16.336 18:13:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:16.336 18:13:34 -- host/digest.sh@80 -- # rw=randread 00:15:16.336 18:13:34 -- host/digest.sh@80 -- # bs=131072 00:15:16.336 18:13:34 -- host/digest.sh@80 -- # qd=16 00:15:16.336 18:13:34 -- host/digest.sh@82 -- # bperfpid=71715 00:15:16.336 18:13:34 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:16.337 18:13:34 -- host/digest.sh@83 -- # waitforlisten 71715 /var/tmp/bperf.sock 00:15:16.337 18:13:34 -- 
common/autotest_common.sh@829 -- # '[' -z 71715 ']' 00:15:16.337 18:13:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:16.337 18:13:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:16.337 18:13:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:16.337 18:13:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.337 18:13:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:16.337 Zero copy mechanism will not be used. 00:15:16.337 [2024-11-18 18:13:34.808771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:16.337 [2024-11-18 18:13:34.808893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:15:16.607 [2024-11-18 18:13:34.948605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.607 [2024-11-18 18:13:35.003905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.607 18:13:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.607 18:13:35 -- common/autotest_common.sh@862 -- # return 0 00:15:16.607 18:13:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:16.607 18:13:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:16.607 18:13:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:16.866 18:13:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:16.866 18:13:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:17.124 nvme0n1 00:15:17.124 18:13:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:17.124 18:13:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:17.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:17.383 Zero copy mechanism will not be used. 00:15:17.383 Running I/O for 2 seconds... 
00:15:19.288 00:15:19.288 Latency(us) 00:15:19.288 [2024-11-18T18:13:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.288 [2024-11-18T18:13:37.892Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:15:19.288 nvme0n1 : 2.00 8125.46 1015.68 0.00 0.00 1966.29 1653.29 4527.94 00:15:19.288 [2024-11-18T18:13:37.892Z] =================================================================================================================== 00:15:19.288 [2024-11-18T18:13:37.892Z] Total : 8125.46 1015.68 0.00 0.00 1966.29 1653.29 4527.94 00:15:19.288 0 00:15:19.288 18:13:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:19.288 18:13:37 -- host/digest.sh@92 -- # get_accel_stats 00:15:19.288 18:13:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:19.288 18:13:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:19.288 18:13:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:19.288 | select(.opcode=="crc32c") 00:15:19.288 | "\(.module_name) \(.executed)"' 00:15:19.548 18:13:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:19.548 18:13:38 -- host/digest.sh@93 -- # exp_module=software 00:15:19.548 18:13:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:19.548 18:13:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:19.548 18:13:38 -- host/digest.sh@97 -- # killprocess 71715 00:15:19.548 18:13:38 -- common/autotest_common.sh@936 -- # '[' -z 71715 ']' 00:15:19.548 18:13:38 -- common/autotest_common.sh@940 -- # kill -0 71715 00:15:19.548 18:13:38 -- common/autotest_common.sh@941 -- # uname 00:15:19.548 18:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.548 18:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71715 00:15:19.548 18:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:19.548 18:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:19.548 killing process with pid 71715 00:15:19.548 18:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71715' 00:15:19.548 18:13:38 -- common/autotest_common.sh@955 -- # kill 71715 00:15:19.548 Received shutdown signal, test time was about 2.000000 seconds 00:15:19.548 00:15:19.548 Latency(us) 00:15:19.548 [2024-11-18T18:13:38.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.548 [2024-11-18T18:13:38.152Z] =================================================================================================================== 00:15:19.548 [2024-11-18T18:13:38.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.548 18:13:38 -- common/autotest_common.sh@960 -- # wait 71715 00:15:19.807 18:13:38 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:15:19.807 18:13:38 -- host/digest.sh@77 -- # local rw bs qd 00:15:19.807 18:13:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:19.807 18:13:38 -- host/digest.sh@80 -- # rw=randwrite 00:15:19.807 18:13:38 -- host/digest.sh@80 -- # bs=4096 00:15:19.807 18:13:38 -- host/digest.sh@80 -- # qd=128 00:15:19.807 18:13:38 -- host/digest.sh@82 -- # bperfpid=71768 00:15:19.807 18:13:38 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:19.807 18:13:38 -- host/digest.sh@83 -- # waitforlisten 71768 /var/tmp/bperf.sock 00:15:19.807 18:13:38 -- 
common/autotest_common.sh@829 -- # '[' -z 71768 ']' 00:15:19.807 18:13:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:19.807 18:13:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:19.807 18:13:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:19.807 18:13:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.807 18:13:38 -- common/autotest_common.sh@10 -- # set +x 00:15:19.807 [2024-11-18 18:13:38.323907] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:19.807 [2024-11-18 18:13:38.324014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71768 ] 00:15:20.066 [2024-11-18 18:13:38.454227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.066 [2024-11-18 18:13:38.505716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.066 18:13:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.066 18:13:38 -- common/autotest_common.sh@862 -- # return 0 00:15:20.066 18:13:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:20.066 18:13:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:20.066 18:13:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:20.326 18:13:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:20.326 18:13:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:20.585 nvme0n1 00:15:20.844 18:13:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:20.844 18:13:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:20.844 Running I/O for 2 seconds... 
00:15:22.748 00:15:22.748 Latency(us) 00:15:22.748 [2024-11-18T18:13:41.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.748 [2024-11-18T18:13:41.352Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.748 nvme0n1 : 2.01 17605.48 68.77 0.00 0.00 7264.38 6404.65 15728.64 00:15:22.748 [2024-11-18T18:13:41.352Z] =================================================================================================================== 00:15:22.748 [2024-11-18T18:13:41.352Z] Total : 17605.48 68.77 0.00 0.00 7264.38 6404.65 15728.64 00:15:22.748 0 00:15:23.007 18:13:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:23.007 18:13:41 -- host/digest.sh@92 -- # get_accel_stats 00:15:23.007 18:13:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:23.007 18:13:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:23.007 | select(.opcode=="crc32c") 00:15:23.007 | "\(.module_name) \(.executed)"' 00:15:23.007 18:13:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:23.265 18:13:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:23.265 18:13:41 -- host/digest.sh@93 -- # exp_module=software 00:15:23.265 18:13:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:23.265 18:13:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:23.265 18:13:41 -- host/digest.sh@97 -- # killprocess 71768 00:15:23.266 18:13:41 -- common/autotest_common.sh@936 -- # '[' -z 71768 ']' 00:15:23.266 18:13:41 -- common/autotest_common.sh@940 -- # kill -0 71768 00:15:23.266 18:13:41 -- common/autotest_common.sh@941 -- # uname 00:15:23.266 18:13:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.266 18:13:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71768 00:15:23.266 18:13:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:23.266 18:13:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:23.266 killing process with pid 71768 00:15:23.266 18:13:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71768' 00:15:23.266 18:13:41 -- common/autotest_common.sh@955 -- # kill 71768 00:15:23.266 Received shutdown signal, test time was about 2.000000 seconds 00:15:23.266 00:15:23.266 Latency(us) 00:15:23.266 [2024-11-18T18:13:41.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.266 [2024-11-18T18:13:41.870Z] =================================================================================================================== 00:15:23.266 [2024-11-18T18:13:41.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.266 18:13:41 -- common/autotest_common.sh@960 -- # wait 71768 00:15:23.266 18:13:41 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:15:23.266 18:13:41 -- host/digest.sh@77 -- # local rw bs qd 00:15:23.266 18:13:41 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:23.266 18:13:41 -- host/digest.sh@80 -- # rw=randwrite 00:15:23.266 18:13:41 -- host/digest.sh@80 -- # bs=131072 00:15:23.266 18:13:41 -- host/digest.sh@80 -- # qd=16 00:15:23.266 18:13:41 -- host/digest.sh@82 -- # bperfpid=71816 00:15:23.266 18:13:41 -- host/digest.sh@83 -- # waitforlisten 71816 /var/tmp/bperf.sock 00:15:23.266 18:13:41 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:23.266 18:13:41 -- 
common/autotest_common.sh@829 -- # '[' -z 71816 ']' 00:15:23.266 18:13:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:23.266 18:13:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.266 18:13:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:23.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:23.266 18:13:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.266 18:13:41 -- common/autotest_common.sh@10 -- # set +x 00:15:23.525 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:23.525 Zero copy mechanism will not be used. 00:15:23.525 [2024-11-18 18:13:41.896979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:23.525 [2024-11-18 18:13:41.897097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71816 ] 00:15:23.525 [2024-11-18 18:13:42.031257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.525 [2024-11-18 18:13:42.087233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.460 18:13:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.460 18:13:42 -- common/autotest_common.sh@862 -- # return 0 00:15:24.460 18:13:42 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:24.460 18:13:42 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:24.460 18:13:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:24.461 18:13:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:24.461 18:13:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:24.719 nvme0n1 00:15:24.978 18:13:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:24.978 18:13:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:24.978 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:24.978 Zero copy mechanism will not be used. 00:15:24.978 Running I/O for 2 seconds... 
00:15:26.881 00:15:26.881 Latency(us) 00:15:26.881 [2024-11-18T18:13:45.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.881 [2024-11-18T18:13:45.485Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:15:26.881 nvme0n1 : 2.00 6790.08 848.76 0.00 0.00 2351.15 1809.69 10485.76 00:15:26.881 [2024-11-18T18:13:45.485Z] =================================================================================================================== 00:15:26.881 [2024-11-18T18:13:45.485Z] Total : 6790.08 848.76 0.00 0.00 2351.15 1809.69 10485.76 00:15:26.881 0 00:15:26.881 18:13:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:26.881 18:13:45 -- host/digest.sh@92 -- # get_accel_stats 00:15:26.881 18:13:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:26.881 18:13:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:26.881 18:13:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:26.881 | select(.opcode=="crc32c") 00:15:26.881 | "\(.module_name) \(.executed)"' 00:15:27.141 18:13:45 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:27.141 18:13:45 -- host/digest.sh@93 -- # exp_module=software 00:15:27.141 18:13:45 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:27.141 18:13:45 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:27.141 18:13:45 -- host/digest.sh@97 -- # killprocess 71816 00:15:27.141 18:13:45 -- common/autotest_common.sh@936 -- # '[' -z 71816 ']' 00:15:27.141 18:13:45 -- common/autotest_common.sh@940 -- # kill -0 71816 00:15:27.141 18:13:45 -- common/autotest_common.sh@941 -- # uname 00:15:27.141 18:13:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.141 18:13:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71816 00:15:27.400 18:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:27.400 18:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:27.400 killing process with pid 71816 00:15:27.400 18:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71816' 00:15:27.400 Received shutdown signal, test time was about 2.000000 seconds 00:15:27.400 00:15:27.400 Latency(us) 00:15:27.400 [2024-11-18T18:13:46.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.400 [2024-11-18T18:13:46.004Z] =================================================================================================================== 00:15:27.400 [2024-11-18T18:13:46.004Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.400 18:13:45 -- common/autotest_common.sh@955 -- # kill 71816 00:15:27.400 18:13:45 -- common/autotest_common.sh@960 -- # wait 71816 00:15:27.400 18:13:45 -- host/digest.sh@126 -- # killprocess 71636 00:15:27.400 18:13:45 -- common/autotest_common.sh@936 -- # '[' -z 71636 ']' 00:15:27.400 18:13:45 -- common/autotest_common.sh@940 -- # kill -0 71636 00:15:27.400 18:13:45 -- common/autotest_common.sh@941 -- # uname 00:15:27.400 18:13:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.400 18:13:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71636 00:15:27.400 18:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.400 18:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.400 18:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71636' 00:15:27.400 killing process with pid 71636 
00:15:27.400 18:13:45 -- common/autotest_common.sh@955 -- # kill 71636 00:15:27.400 18:13:45 -- common/autotest_common.sh@960 -- # wait 71636 00:15:27.659 00:15:27.659 real 0m15.948s 00:15:27.659 user 0m31.214s 00:15:27.659 sys 0m4.223s 00:15:27.659 18:13:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:27.659 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:27.659 ************************************ 00:15:27.659 END TEST nvmf_digest_clean 00:15:27.659 ************************************ 00:15:27.659 18:13:46 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:15:27.659 18:13:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:27.659 18:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.659 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:27.659 ************************************ 00:15:27.659 START TEST nvmf_digest_error 00:15:27.659 ************************************ 00:15:27.659 18:13:46 -- common/autotest_common.sh@1114 -- # run_digest_error 00:15:27.659 18:13:46 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:15:27.659 18:13:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:27.659 18:13:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.659 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:27.659 18:13:46 -- nvmf/common.sh@469 -- # nvmfpid=71905 00:15:27.659 18:13:46 -- nvmf/common.sh@470 -- # waitforlisten 71905 00:15:27.659 18:13:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:27.659 18:13:46 -- common/autotest_common.sh@829 -- # '[' -z 71905 ']' 00:15:27.659 18:13:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.659 18:13:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.659 18:13:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.659 18:13:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.659 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 [2024-11-18 18:13:46.269388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:27.919 [2024-11-18 18:13:46.270202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.919 [2024-11-18 18:13:46.410432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.919 [2024-11-18 18:13:46.461447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.919 [2024-11-18 18:13:46.461620] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.919 [2024-11-18 18:13:46.461634] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.919 [2024-11-18 18:13:46.461641] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:27.919 [2024-11-18 18:13:46.461665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.857 18:13:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.857 18:13:47 -- common/autotest_common.sh@862 -- # return 0 00:15:28.857 18:13:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:28.857 18:13:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.857 18:13:47 -- common/autotest_common.sh@10 -- # set +x 00:15:28.857 18:13:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.857 18:13:47 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:15:28.857 18:13:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.857 18:13:47 -- common/autotest_common.sh@10 -- # set +x 00:15:28.857 [2024-11-18 18:13:47.202181] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:15:28.857 18:13:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.857 18:13:47 -- host/digest.sh@104 -- # common_target_config 00:15:28.857 18:13:47 -- host/digest.sh@43 -- # rpc_cmd 00:15:28.857 18:13:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.857 18:13:47 -- common/autotest_common.sh@10 -- # set +x 00:15:28.857 null0 00:15:28.857 [2024-11-18 18:13:47.271824] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.857 [2024-11-18 18:13:47.295952] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.857 18:13:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.857 18:13:47 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:15:28.857 18:13:47 -- host/digest.sh@54 -- # local rw bs qd 00:15:28.857 18:13:47 -- host/digest.sh@56 -- # rw=randread 00:15:28.857 18:13:47 -- host/digest.sh@56 -- # bs=4096 00:15:28.857 18:13:47 -- host/digest.sh@56 -- # qd=128 00:15:28.857 18:13:47 -- host/digest.sh@58 -- # bperfpid=71937 00:15:28.857 18:13:47 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:15:28.857 18:13:47 -- host/digest.sh@60 -- # waitforlisten 71937 /var/tmp/bperf.sock 00:15:28.857 18:13:47 -- common/autotest_common.sh@829 -- # '[' -z 71937 ']' 00:15:28.857 18:13:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:28.857 18:13:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:28.857 18:13:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:28.857 18:13:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.857 18:13:47 -- common/autotest_common.sh@10 -- # set +x 00:15:28.857 [2024-11-18 18:13:47.353854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:28.857 [2024-11-18 18:13:47.353969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71937 ] 00:15:29.126 [2024-11-18 18:13:47.485275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.126 [2024-11-18 18:13:47.537140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.078 18:13:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.078 18:13:48 -- common/autotest_common.sh@862 -- # return 0 00:15:30.078 18:13:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:30.078 18:13:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:30.078 18:13:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:30.078 18:13:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.078 18:13:48 -- common/autotest_common.sh@10 -- # set +x 00:15:30.078 18:13:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.078 18:13:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:30.078 18:13:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:30.336 nvme0n1 00:15:30.336 18:13:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:30.336 18:13:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.336 18:13:48 -- common/autotest_common.sh@10 -- # set +x 00:15:30.336 18:13:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.336 18:13:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:30.336 18:13:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:30.594 Running I/O for 2 seconds... 
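The error path reuses the same plumbing but routes the target's crc32c opcode through the error-injection accel module and then corrupts it, so the data digests the host verifies fail and the READs complete with transient transport errors — which is what the wall of nvme_tcp/nvme_qpair messages below is showing. A hedged sketch of that setup, with RPC names and arguments taken from the trace (rpc_cmd talks to the target on its default socket, bperf_rpc to the bdevperf instance on /var/tmp/bperf.sock):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o crc32c -m error                       # target started with --wait-for-rpc, so this lands pre-init
  $RPC accel_error_inject_error -o crc32c -t disable             # keep digests clean while the controller attaches
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256      # start corrupting crc32c results (-i 256 as in the trace)
  # host side now logs "data digest error" on completions and keeps retrying,
  # since --bdev-retry-count is -1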
00:15:30.594 [2024-11-18 18:13:48.996332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.594 [2024-11-18 18:13:48.996392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:48.996405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.011798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.011845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.026624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.026671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.026683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.042213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.042278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.042299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.058467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.058505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.058519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.075476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.075524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.075536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.091966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.092012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.092024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.108701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.108749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.108761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.124470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.124517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.140125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.140187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.140199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.155767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.155814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.155826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.171659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.171707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.171721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.595 [2024-11-18 18:13:49.189328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.595 [2024-11-18 18:13:49.189374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.595 [2024-11-18 18:13:49.189386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:30.853 [2024-11-18 18:13:49.207002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40) 00:15:30.853 [2024-11-18 18:13:49.207059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:30.853 [2024-11-18 18:13:49.207070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:30.853 [2024-11-18 18:13:49.223318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40)
00:15:30.853 [2024-11-18 18:13:49.223364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:30.853 [2024-11-18 18:13:49.223375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message pattern repeats for the remaining qid:1 reads between 18:13:49.240 and 18:13:50.921 (cid stepping up 31, 33, ... 125, then 126, 124, ... down to 6): an nvme_tcp.c:1391 data digest error on tqpair=(0xaa3d40), the nvme_qpair.c READ command print, and a TRANSIENT TRANSPORT ERROR (00/22) completion; the near-identical entries are trimmed here ...]
00:15:32.414 [2024-11-18 18:13:50.936188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40)
00:15:32.414 [2024-11-18 18:13:50.936221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:32.414 [2024-11-18 18:13:50.936247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:32.414 [2024-11-18 18:13:50.950536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40)
00:15:32.414 [2024-11-18 18:13:50.950578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:32.414 [2024-11-18 18:13:50.950591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:32.414 [2024-11-18 18:13:50.965212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaa3d40)
00:15:32.414 [2024-11-18 18:13:50.965251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:32.414 [2024-11-18 18:13:50.965279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:15:32.414
00:15:32.414 Latency(us)
00:15:32.414 [2024-11-18T18:13:51.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:32.414 [2024-11-18T18:13:51.018Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:15:32.414 nvme0n1 : 2.00 16231.23 63.40 0.00 0.00 7881.43 6940.86 30146.56
00:15:32.414 [2024-11-18T18:13:51.018Z] ===================================================================================================================
00:15:32.414 [2024-11-18T18:13:51.018Z] Total : 16231.23 63.40 0.00 0.00 7881.43 6940.86 30146.56
00:15:32.414 0
00:15:32.414 18:13:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:15:32.414 18:13:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:15:32.414 | .driver_specific
00:15:32.414 | .nvme_error
00:15:32.414 | .status_code
00:15:32.414 | .command_transient_transport_error'
00:15:32.414 18:13:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:15:32.414 18:13:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:15:32.982 18:13:51 -- host/digest.sh@71 -- # (( 127 > 0 ))
00:15:32.982 18:13:51 -- host/digest.sh@73 -- # killprocess 71937
00:15:32.982 18:13:51 -- common/autotest_common.sh@936 -- # '[' -z 71937 ']'
00:15:32.982 18:13:51 -- common/autotest_common.sh@940 -- # kill -0 71937
00:15:32.982 18:13:51 -- common/autotest_common.sh@941 -- # uname
00:15:32.982 18:13:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:32.982 18:13:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71937
00:15:32.982 killing process with pid 71937 Received shutdown signal, test time was about 2.000000 seconds
00:15:32.982
00:15:32.982 Latency(us)
00:15:32.982 [2024-11-18T18:13:51.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:32.982 [2024-11-18T18:13:51.586Z] ===================================================================================================================
00:15:32.982 [2024-11-18T18:13:51.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00
0.00 00:15:32.983 18:13:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.983 18:13:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.983 18:13:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71937' 00:15:32.983 18:13:51 -- common/autotest_common.sh@955 -- # kill 71937 00:15:32.983 18:13:51 -- common/autotest_common.sh@960 -- # wait 71937 00:15:32.983 18:13:51 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:15:32.983 18:13:51 -- host/digest.sh@54 -- # local rw bs qd 00:15:32.983 18:13:51 -- host/digest.sh@56 -- # rw=randread 00:15:32.983 18:13:51 -- host/digest.sh@56 -- # bs=131072 00:15:32.983 18:13:51 -- host/digest.sh@56 -- # qd=16 00:15:32.983 18:13:51 -- host/digest.sh@58 -- # bperfpid=71997 00:15:32.983 18:13:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:15:32.983 18:13:51 -- host/digest.sh@60 -- # waitforlisten 71997 /var/tmp/bperf.sock 00:15:32.983 18:13:51 -- common/autotest_common.sh@829 -- # '[' -z 71997 ']' 00:15:32.983 18:13:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:32.983 18:13:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.983 18:13:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:32.983 18:13:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.983 18:13:51 -- common/autotest_common.sh@10 -- # set +x 00:15:33.242 [2024-11-18 18:13:51.615633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:33.242 [2024-11-18 18:13:51.615972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:15:33.242 Zero copy mechanism will not be used. 
00:15:33.242 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71997 ] 00:15:33.242 [2024-11-18 18:13:51.749091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.242 [2024-11-18 18:13:51.799132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.179 18:13:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.179 18:13:52 -- common/autotest_common.sh@862 -- # return 0 00:15:34.180 18:13:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:34.180 18:13:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:34.438 18:13:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:34.438 18:13:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.438 18:13:52 -- common/autotest_common.sh@10 -- # set +x 00:15:34.438 18:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.438 18:13:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:34.439 18:13:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:34.698 nvme0n1 00:15:34.698 18:13:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:15:34.698 18:13:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.698 18:13:53 -- common/autotest_common.sh@10 -- # set +x 00:15:34.698 18:13:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.698 18:13:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:34.698 18:13:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:34.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:34.959 Zero copy mechanism will not be used. 00:15:34.959 Running I/O for 2 seconds... 
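For reference, the host/digest.sh trace above boils down to the following standalone sequence. This is a minimal sketch assembled from the commands visible in this run (the bdevperf and rpc.py paths, the 10.0.0.2:4420 target, and the nqn.2016-06.io.spdk:cnode1 subsystem are taken from this environment), not the test script itself; the socket used for the accel_error_inject_error calls is an assumption, since the script issues them through its rpc_cmd helper, and the wait loop is only a stand-in for waitforlisten.

# Start bdevperf as above: core mask 0x2, RPC socket /var/tmp/bperf.sock,
# 128 KiB random reads, queue depth 16, 2 s runtime, -z to wait for RPC configuration.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.2; done   # stand-in for waitforlisten
# Keep per-status-code NVMe error counters and retry failed I/O indefinitely (host/digest.sh@61).
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any previous crc32c injection, then attach the controller with TCP data digest enabled.
$rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable        # socket is an assumption
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 32nd accel crc32c computation so received data digests stop matching.
$rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32   # socket is an assumption
# Run the workload, then read the counter that get_transient_errcount checks.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

With the crc32c corruption active, every affected read completes with the TRANSIENT TRANSPORT ERROR seen in the entries below, and the final jq query returns the count that the (( count > 0 )) check in the trace above consumes.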
00:15:34.959 [2024-11-18 18:13:53.315264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.315331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.315346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.319755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.319795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.319809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.324162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.324199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.324228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.328498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.328561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.328592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.332778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.332815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.332844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.336851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.337073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.337091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.341081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.341119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.341148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.345109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.345144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.345172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.349013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.349048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.349076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.353037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.353071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.353100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.357131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.357166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.357195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.361200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.361235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.361263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.365195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.365230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.365258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.369215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.369250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.369278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.373281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.373316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.373345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.377260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.377295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.377323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.381300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.381335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.381363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.385342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.389388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.389422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.389450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.393445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.393480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.393508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.397484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.397519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:34.959 [2024-11-18 18:13:53.397557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.959 [2024-11-18 18:13:53.401493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.959 [2024-11-18 18:13:53.401527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.959 [2024-11-18 18:13:53.401570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.405905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.405969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.405998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.410390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.410429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.410443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.414805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.414841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.419283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.419319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.419348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.424015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.424208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.424226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.428648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.428689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.428704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.433363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.433402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.433431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.438232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.438274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.438289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.442998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.443193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.443211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.449149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.449221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.449244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.455361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.455417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.455447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.460100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.460139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.460169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.464929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.464992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.465020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.468974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.469009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.469038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.473431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.473469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.473498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.477814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.477850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.477881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.482022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.482232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.482251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.486458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.486499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.486526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.490668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.490703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.490732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.494822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:34.960 [2024-11-18 18:13:53.494857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.494885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.498922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.498970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.498998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.502986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.503020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.503048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.507087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.507121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.507149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.511083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.511117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.514984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.515018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.515046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.960 [2024-11-18 18:13:53.519011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.960 [2024-11-18 18:13:53.519045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.960 [2024-11-18 18:13:53.519073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.522944] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.522977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.523005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.526910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.526944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.526972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.531009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.531045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.531073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.534874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.534908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.534936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.538984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.539017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.539046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.543149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.543183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.547092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.547126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.547154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.551083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.551117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.551144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.555155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.555235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.961 [2024-11-18 18:13:53.559465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:34.961 [2024-11-18 18:13:53.559501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:34.961 [2024-11-18 18:13:53.559529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.563904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.563970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.563998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.568345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.568382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.568411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.572426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.572461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.572489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.576508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.576554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.576582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.580526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.580569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.580596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.584511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.584558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.584585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.588470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.222 [2024-11-18 18:13:53.588504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.222 [2024-11-18 18:13:53.588532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.222 [2024-11-18 18:13:53.592564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.592595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.592607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.597072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.597258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.597275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.601320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.601487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.601504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.605670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.605705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.605733] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.609637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.609671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.609698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.613624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.613658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.613686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.617567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.617612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.617640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.621420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.621619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.621636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.625627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.625662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.625690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.629483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.629678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.629696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.633781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.633829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.633843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.637740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.637776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.637804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.641673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.641707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.645603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.645637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.645666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.649494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.649715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.649733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.653783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.653819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.653848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.657775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.657809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.657838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.661519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.661589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.661618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.665430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.665628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.665645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.669464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.669494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.669522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.673436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.673670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.673799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.678187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.678418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.678685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.683204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.683375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.683496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.688167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.688351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.688624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.693293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.693500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.693757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.698262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.698449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.698632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.703114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.223 [2024-11-18 18:13:53.703304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.223 [2024-11-18 18:13:53.703436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.223 [2024-11-18 18:13:53.707818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.708032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.708165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.712656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.712808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.712826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.717302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.717508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.717719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.722294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.722469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.722692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.727228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:35.224 [2024-11-18 18:13:53.727266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.727294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.731482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.731519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.731558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.735575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.735610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.735638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.739777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.739814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.739842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.744001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.744037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.748024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.748087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.752168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.752205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.752234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.756290] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.756325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.756370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.760395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.760430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.760459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.764570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.764605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.764634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.768722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.768758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.768786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.772806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.772841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.772869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.776868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.776903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.776931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.781050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.781086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.781115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.785207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.785242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.785270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.789339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.789374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.789403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.793623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.793658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.793686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.797652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.797687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.797715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.801668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.801703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.801731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.805857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.805892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.805921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.809901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.809937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.809966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.814034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.814069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.814097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.224 [2024-11-18 18:13:53.818328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.224 [2024-11-18 18:13:53.818368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.224 [2024-11-18 18:13:53.818382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.822828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.822865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.822895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.827131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.827166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.827194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.831497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.831560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.831574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.835658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.835693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.835721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.839805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.839840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.839869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.844008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.844043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.844071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.848042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.848076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.848104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.852150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.852185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.852213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.856104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.856139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.856167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.860187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.860221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.860249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.864277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.864312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.864340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.868256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.868290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:35.486 [2024-11-18 18:13:53.868318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.872387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.872422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.872449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.876378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.876412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.876440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.880618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.880653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.880681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.884917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.884995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.889397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.889432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.889460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.893834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.893873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.893887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.898224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.898263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.898277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.902608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.902659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.902688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.906896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.486 [2024-11-18 18:13:53.906961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.486 [2024-11-18 18:13:53.906989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.486 [2024-11-18 18:13:53.911219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.911254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.911282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.915507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.915570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.915599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.919532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.919608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.923610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.923644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.923672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.927837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.927870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.927897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.931866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.931901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.931929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.935838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.935872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.935900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.941278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.941346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.941369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.946659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.946698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.946727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.950588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.950639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.950667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.954810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.954845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.954873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.958830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:35.487 [2024-11-18 18:13:53.958866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.958895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.962861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.962897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.962940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.966968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.967017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.967045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.971232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.971267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.971295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.975584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.975636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.975665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.979994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.980045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.980059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.984178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.984213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.984242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.988295] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.988330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.988358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.992461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.992497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.992524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:53.996742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:53.996776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:53.996805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.000640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.000674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:54.000701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.004715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.004750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:54.004778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.008942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.008978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:54.008991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.012979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.013013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:54.013041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.016990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.017025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.487 [2024-11-18 18:13:54.017053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.487 [2024-11-18 18:13:54.021186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.487 [2024-11-18 18:13:54.021221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.021249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.025227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.025263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.025292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.029319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.029355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.029383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.033520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.033581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.033613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.037706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.037742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.037771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.041691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.041724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.041751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.045611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.045644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.045671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.049480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.049685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.049701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.053596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.053630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.053658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.057537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.057738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.057755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.061793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.061827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.061855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.065711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.065746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.065774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.069557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.069590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.069618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.073568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.073767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.073783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.077641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.077675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.077703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.488 [2024-11-18 18:13:54.081645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.488 [2024-11-18 18:13:54.081681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.488 [2024-11-18 18:13:54.081709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.085797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.085831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.085859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.089861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.089895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.089923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.093966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.094003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.094031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.097973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.098007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:35.749 [2024-11-18 18:13:54.098035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.101894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.101927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.101955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.105903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.105937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.105965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.109919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.109953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.109980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.113888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.113922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.113949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.117881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.117915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.117942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.121906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.121940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.121969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.125886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.749 [2024-11-18 18:13:54.125920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.749 [2024-11-18 18:13:54.125948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.749 [2024-11-18 18:13:54.129792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.129826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.129854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.133759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.133793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.137674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.137708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.137735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.141649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.141683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.141711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.145600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.145634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.145662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.149575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.149609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.149636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.153529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.153727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.153744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.157802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.157838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.157866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.161753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.161786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.161813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.165681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.165716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.165744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.169806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.169841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.169870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.173801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.173834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.173863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.177696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.177729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.177756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.182072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:35.750 [2024-11-18 18:13:54.182107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.182132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.186673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.186709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.186737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.191026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.191060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.191088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.195254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.195288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.195317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.199237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.199271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.199300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.203211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.203245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.203274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.207242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.207276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.207304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.211301] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.211364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.215311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.215345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.750 [2024-11-18 18:13:54.215374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.750 [2024-11-18 18:13:54.219373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.750 [2024-11-18 18:13:54.219407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.219436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.223377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.223412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.223440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.227305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.227340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.227369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.231346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.231380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.231408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.235355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.235390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.235419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.239390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.239424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.239452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.243331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.243365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.243393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.247373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.247408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.247436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.251418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.251453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.251481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.255545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.255609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.255623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.259589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.259624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.259652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.263534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.263598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.263628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.267738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.267773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.267802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.271896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.271947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.271975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.276072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.276107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.276136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.280430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.280477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.280505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.284988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.285084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.285116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.289457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.289492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.289521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.293795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.293834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.293863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.298244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.298283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.298298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.302690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.302735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.302765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.306918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.306979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.311223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.311259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.311288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.315464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.315499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.751 [2024-11-18 18:13:54.315527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.751 [2024-11-18 18:13:54.320062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.751 [2024-11-18 18:13:54.320255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.320272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.324744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.324782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.324812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.329365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.329402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.333881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.333920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.333969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.338371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.338411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.338425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.342996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.343031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.343059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:35.752 [2024-11-18 18:13:54.347812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:35.752 [2024-11-18 18:13:54.347851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:35.752 [2024-11-18 18:13:54.347865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.352310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.352346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.352358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.356728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.356781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.356795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.361003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.361049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.361078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.365064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.365098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.365127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.369109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.369144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.369172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.373050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.373111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.377104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.377138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.377165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.381021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.381055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.381083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.385054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.385088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.385116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.389213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.389247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.389275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.393176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.393211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.393239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.397455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.397492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.397505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.402081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.402118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.402131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.406625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.406660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.406690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.411131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.411165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.411193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.415479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:36.013 [2024-11-18 18:13:54.415515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.415557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.419877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.419912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.419955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.424050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.424086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.424114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.428201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.428237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.428266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.433762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.013 [2024-11-18 18:13:54.433831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.013 [2024-11-18 18:13:54.433853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.013 [2024-11-18 18:13:54.438813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.438852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.438881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.442966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.443002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.443030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.447059] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.447094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.447123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.451062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.451096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.451125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.455077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.455112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.455141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.459196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.459231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.459260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.463288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.463323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.463352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.467462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.467496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.467525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.471494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.471558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.471572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.475574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.475608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.475636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.479622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.479655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.479683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.483632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.483665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.483693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.487659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.487693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.487721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.491756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.491790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.491818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.495746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.495779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.495807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.499675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.499708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.499736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.503636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.503670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.503698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.507683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.507717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.507744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.511767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.511801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.511829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.515819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.515853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.515881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.519871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.519904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.519933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.523865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.523899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.523928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.527876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.527910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.527937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.531972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.532005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.532034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.535988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.536022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.536050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.540116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.540150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.014 [2024-11-18 18:13:54.540179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.014 [2024-11-18 18:13:54.544187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.014 [2024-11-18 18:13:54.544221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.544251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.548206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.548240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.548268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.552261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.552296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.552323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.556311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.556345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:36.015 [2024-11-18 18:13:54.556373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.560319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.560354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.560382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.564194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.564228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.564256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.568247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.568283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.568311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.572362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.572398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.572426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.576433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.576467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.576496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.580546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.580594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.580622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.584496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.584556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.584586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.588723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.588759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.588788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.592768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.592803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.592832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.596878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.596914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.596943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.601333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.601368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.601396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.605933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.606183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.015 [2024-11-18 18:13:54.610748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.015 [2024-11-18 18:13:54.610788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.015 [2024-11-18 18:13:54.610818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.615306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.275 [2024-11-18 18:13:54.615341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.275 [2024-11-18 18:13:54.615370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.620134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.275 [2024-11-18 18:13:54.620173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.275 [2024-11-18 18:13:54.620203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.624774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.275 [2024-11-18 18:13:54.624812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.275 [2024-11-18 18:13:54.624826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.629300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.275 [2024-11-18 18:13:54.629336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.275 [2024-11-18 18:13:54.629381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.633610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.275 [2024-11-18 18:13:54.633663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.275 [2024-11-18 18:13:54.633688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.275 [2024-11-18 18:13:54.638121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.638171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.638229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.642794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.642833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.642863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.647343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:36.276 [2024-11-18 18:13:54.647379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.647408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.651844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.651882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.651912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.656474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.656508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.656536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.660909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.660988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.661016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.665315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.665350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.665378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.669737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.669775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.669790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.674262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.674315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.678856] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.678893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.678938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.683323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.683358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.683385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.687737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.687772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.687805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.692097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.692131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.692160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.696205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.696239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.696267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.700257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.700292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.700320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.704325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.704359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.704387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.708443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.708477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.708505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.712530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.712586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.712600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.716645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.716678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.716705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.720729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.720762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.720789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.724716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.724748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.724777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.728707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.728740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.728768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.732717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.732750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.732778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.736737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.736770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.736797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.740760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.740794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.740822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.744684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.744716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.744744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.276 [2024-11-18 18:13:54.748703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.276 [2024-11-18 18:13:54.748735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.276 [2024-11-18 18:13:54.748763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.752699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.752733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.752761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.756650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.756683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.756710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.760720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.760754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.760781] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.764734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.764768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.764795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.768786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.768820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.772770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.772804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.772832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.776801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.776848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.776861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.780771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.780805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.780832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.784749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.784783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.784810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.788710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.788742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.788770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.792668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.792701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.792729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.796656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.796689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.796717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.800657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.800690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.800718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.804733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.804765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.804793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.808713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.808746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.808774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.812757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.812790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.812818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.816761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.816794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.816822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.820813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.820846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.820874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.824801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.824835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.824862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.828790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.828825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.828853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.832780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.832815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.832843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.837204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.837239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.837267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.841482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.841518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.841573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.845856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.845891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.845933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.850114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.850149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.850178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.854483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.854522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.854581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.277 [2024-11-18 18:13:54.858843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.277 [2024-11-18 18:13:54.858879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.277 [2024-11-18 18:13:54.858920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.278 [2024-11-18 18:13:54.863077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.278 [2024-11-18 18:13:54.863111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.278 [2024-11-18 18:13:54.863139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.278 [2024-11-18 18:13:54.867278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.278 [2024-11-18 18:13:54.867313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.278 [2024-11-18 18:13:54.867341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.278 [2024-11-18 18:13:54.871869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.278 [2024-11-18 18:13:54.871939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.278 [2024-11-18 18:13:54.871968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.876620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:36.538 [2024-11-18 18:13:54.876685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.876700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.880903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.881124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.881157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.885469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.885524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.885549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.889642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.889677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.889705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.893657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.893691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.893719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.897831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.897866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.897894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.901870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.901905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.901932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.906146] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.906180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.906234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.910801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.910837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.910867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.915028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.915063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.915091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.919921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.920018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.920040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.925689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.925758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.925779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.930052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.930091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.930120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.934220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.934274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.934304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.938455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.938495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.938510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.942637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.942672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.942699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.946740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.946775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.946804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.951079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.951114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.951142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.955240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.955274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.955303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.959382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.959418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.959446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.538 [2024-11-18 18:13:54.963709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.538 [2024-11-18 18:13:54.963746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.538 [2024-11-18 18:13:54.963775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.967883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.967920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.967964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.972102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.972136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.972164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.976449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.976484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.976512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.980656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.980690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.980718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.984754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.984805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.984819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.989122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.989158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.989187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.993205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.993239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.993267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:54.997312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:54.997346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:54.997374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.001388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.001423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.001452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.005324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.005359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.005387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.009315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.009349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.009378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.013380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.013414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.013442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.017350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.017384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.017411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.021426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.021460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:36.539 [2024-11-18 18:13:55.021488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.025440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.025475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.025502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.029501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.029577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.029592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.033599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.033633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.033662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.037833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.037869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.037898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.042011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.042049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.042063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.046669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.046706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.046735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.051240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.051290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.051318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.055691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.055730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.055744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.060173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.060208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.060237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.064441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.064476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.064504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.068769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.068804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.068833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.072997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.539 [2024-11-18 18:13:55.073032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.539 [2024-11-18 18:13:55.073060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.539 [2024-11-18 18:13:55.077106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.077141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.077169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.081200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.081235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.081262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.085421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.085458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.085470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.089519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.089598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.089628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.093585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.093619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.093648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.097860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.097897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.097911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.101891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.101926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.101969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.106049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.106084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.106112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.110578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 
00:15:36.540 [2024-11-18 18:13:55.110640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.110655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.114825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.114860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.114888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.118885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.118919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.118947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.123209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.123244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.123274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.127364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.127398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.127426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.131664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.131698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.131727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.540 [2024-11-18 18:13:55.136044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.540 [2024-11-18 18:13:55.136120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.540 [2024-11-18 18:13:55.136133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.800 [2024-11-18 18:13:55.140386] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.800 [2024-11-18 18:13:55.140423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.800 [2024-11-18 18:13:55.140452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.800 [2024-11-18 18:13:55.144595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.800 [2024-11-18 18:13:55.144630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.800 [2024-11-18 18:13:55.144658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.800 [2024-11-18 18:13:55.148794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.800 [2024-11-18 18:13:55.148830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.800 [2024-11-18 18:13:55.148844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.800 [2024-11-18 18:13:55.152844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.800 [2024-11-18 18:13:55.152878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.800 [2024-11-18 18:13:55.152905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.156891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.156925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.156954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.160908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.160943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.160971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.164981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.165020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.165047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.169109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.169144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.173142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.173176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.173204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.177186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.177221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.177249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.181208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.181243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.181270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.185330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.185366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.185379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.189439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.189476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.189488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.193664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.193698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.193726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.197639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.197672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.197700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.201630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.201662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.201689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.205488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.205522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.205564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.209433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.209468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.209496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.213445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.213480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.213507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.217474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.217508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.217536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.221497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.221554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.221583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.225433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.225467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.225495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.229457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.229492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.229519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.233675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.233709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.233737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.237698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.237732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.237760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.241774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.241809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.241837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.245867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.245901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.245929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.249806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.249840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:36.801 [2024-11-18 18:13:55.249868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.253812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.253846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.253874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.257841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.257876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.801 [2024-11-18 18:13:55.257903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.801 [2024-11-18 18:13:55.261925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.801 [2024-11-18 18:13:55.261976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.262004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.266032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.266067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.266095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.270084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.270118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.270146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.274100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.274134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.274161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.278144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.278178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.278248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.282099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.282133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.282161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.286145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.286179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.286248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.290509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.290609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.290624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.294842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.294907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.294936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.299240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.299293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.299322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.303731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.303769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.303798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:36.802 [2024-11-18 18:13:55.308373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1911940) 00:15:36.802 [2024-11-18 18:13:55.308411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:36.802 [2024-11-18 18:13:55.308441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:36.802 00:15:36.802 Latency(us) 00:15:36.802 [2024-11-18T18:13:55.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.802 [2024-11-18T18:13:55.406Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:15:36.802 nvme0n1 : 2.00 7368.55 921.07 0.00 0.00 2168.41 1675.64 6613.18 00:15:36.802 [2024-11-18T18:13:55.406Z] =================================================================================================================== 00:15:36.802 [2024-11-18T18:13:55.406Z] Total : 7368.55 921.07 0.00 0.00 2168.41 1675.64 6613.18 00:15:36.802 0 00:15:36.802 18:13:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:36.802 18:13:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:36.802 18:13:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:36.802 18:13:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:36.802 | .driver_specific 00:15:36.802 | .nvme_error 00:15:36.802 | .status_code 00:15:36.802 | .command_transient_transport_error' 00:15:37.062 18:13:55 -- host/digest.sh@71 -- # (( 475 > 0 )) 00:15:37.062 18:13:55 -- host/digest.sh@73 -- # killprocess 71997 00:15:37.062 18:13:55 -- common/autotest_common.sh@936 -- # '[' -z 71997 ']' 00:15:37.062 18:13:55 -- common/autotest_common.sh@940 -- # kill -0 71997 00:15:37.062 18:13:55 -- common/autotest_common.sh@941 -- # uname 00:15:37.062 18:13:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.062 18:13:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71997 00:15:37.062 killing process with pid 71997 00:15:37.062 Received shutdown signal, test time was about 2.000000 seconds 00:15:37.062 00:15:37.062 Latency(us) 00:15:37.062 [2024-11-18T18:13:55.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.062 [2024-11-18T18:13:55.666Z] =================================================================================================================== 00:15:37.062 [2024-11-18T18:13:55.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.062 18:13:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:37.062 18:13:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:37.062 18:13:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71997' 00:15:37.062 18:13:55 -- common/autotest_common.sh@955 -- # kill 71997 00:15:37.062 18:13:55 -- common/autotest_common.sh@960 -- # wait 71997 00:15:37.322 18:13:55 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:15:37.322 18:13:55 -- host/digest.sh@54 -- # local rw bs qd 00:15:37.322 18:13:55 -- host/digest.sh@56 -- # rw=randwrite 00:15:37.322 18:13:55 -- host/digest.sh@56 -- # bs=4096 00:15:37.322 18:13:55 -- host/digest.sh@56 -- # qd=128 00:15:37.322 18:13:55 -- host/digest.sh@58 -- # bperfpid=72052 00:15:37.322 18:13:55 -- host/digest.sh@60 -- # waitforlisten 72052 /var/tmp/bperf.sock 00:15:37.322 18:13:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:15:37.322 18:13:55 -- common/autotest_common.sh@829 -- # 
'[' -z 72052 ']' 00:15:37.322 18:13:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:37.322 18:13:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.322 18:13:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:37.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:37.322 18:13:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.322 18:13:55 -- common/autotest_common.sh@10 -- # set +x 00:15:37.322 [2024-11-18 18:13:55.856771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:37.322 [2024-11-18 18:13:55.857103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72052 ] 00:15:37.581 [2024-11-18 18:13:55.991433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.581 [2024-11-18 18:13:56.047044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.519 18:13:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.519 18:13:56 -- common/autotest_common.sh@862 -- # return 0 00:15:38.519 18:13:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:38.519 18:13:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:38.519 18:13:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:38.519 18:13:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.519 18:13:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.519 18:13:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.519 18:13:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:38.519 18:13:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:38.778 nvme0n1 00:15:38.778 18:13:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:15:38.778 18:13:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.778 18:13:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.778 18:13:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.778 18:13:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:38.778 18:13:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:39.037 Running I/O for 2 seconds... 
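The trace above is digest.sh setting up its randwrite error-injection pass: bdevperf is started with its own RPC socket, crc32c corruption is injected into the accel layer, and the controller is attached with data digest enabled. Below is a condensed sketch of that sequence in shell form, not the harness itself; the bdevperf flags, the bperf.sock path, the 10.0.0.2 target and the jq filter are taken from this run, while the application socket used by rpc_cmd is an assumption (shown as the rpc.py default).

    #!/usr/bin/env bash
    # Sketch of the run_bperf_err flow traced above (assumptions noted inline).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    app_sock=/var/tmp/spdk.sock          # assumption: default socket used by rpc_cmd

    # Start bdevperf with a deferred workload (-z) and its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Wait for the bdevperf RPC socket to appear (waitforlisten in the harness).
    while [ ! -S "$bperf_sock" ]; do sleep 0.1; done

    # Keep per-bdev NVMe error statistics and retry failed commands indefinitely.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Corrupt every 256th crc32c operation in the accel layer (rpc_cmd in the trace).
    "$rpc" -s "$app_sock" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Attach the subsystem with data digest enabled so the corruption is detected.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the 2-second I/O job that bdevperf deferred at startup.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

    # Each detected digest error is retried and counted as a transient transport error.
    "$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The same iostat/jq check is what produced the "(( 475 > 0 ))" assertion for the preceding randread pass.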
00:15:39.037 [2024-11-18 18:13:57.490909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ddc00 00:15:39.037 [2024-11-18 18:13:57.492281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.492324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.506489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fef90 00:15:39.037 [2024-11-18 18:13:57.508299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.508513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.521934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ff3c8 00:15:39.037 [2024-11-18 18:13:57.523465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.523694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.537723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190feb58 00:15:39.037 [2024-11-18 18:13:57.539375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.539412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.552527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fe720 00:15:39.037 [2024-11-18 18:13:57.553862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.553895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.566986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fe2e8 00:15:39.037 [2024-11-18 18:13:57.568330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.568363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.581375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fdeb0 00:15:39.037 [2024-11-18 18:13:57.582760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.582791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
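In the repeated entries that follow, each injected corruption appears twice: first as a "Data digest error" from tcp.c when the crc32 of the received data is checked, then as the completion printed with status (00/22), that is status code type 0x0 and status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR, which the retry policy configured above absorbs. A small helper for tallying such completions from a saved console log; the console.log file name is hypothetical.

    # Count transient-transport-error completions per queue ID in a saved log.
    grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' console.log \
        | awk -F'qid:' '{ print $2 }' | sort -n | uniq -c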
00:15:39.037 [2024-11-18 18:13:57.595881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fda78 00:15:39.037 [2024-11-18 18:13:57.597395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.597430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.610453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fd640 00:15:39.037 [2024-11-18 18:13:57.611856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.611884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:39.037 [2024-11-18 18:13:57.624974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fd208 00:15:39.037 [2024-11-18 18:13:57.626232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.037 [2024-11-18 18:13:57.626265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.639946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fcdd0 00:15:39.297 [2024-11-18 18:13:57.641260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.641430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.654766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fc998 00:15:39.297 [2024-11-18 18:13:57.656009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.656055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.669133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fc560 00:15:39.297 [2024-11-18 18:13:57.670386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.683395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fc128 00:15:39.297 [2024-11-18 18:13:57.684627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.684830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.698755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fbcf0 00:15:39.297 [2024-11-18 18:13:57.700073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.700109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.715018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fb8b8 00:15:39.297 [2024-11-18 18:13:57.716292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.716325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.729543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fb480 00:15:39.297 [2024-11-18 18:13:57.730799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.730831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.743798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fb048 00:15:39.297 [2024-11-18 18:13:57.745015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.745046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.758086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fac10 00:15:39.297 [2024-11-18 18:13:57.759279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.759326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.772393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fa7d8 00:15:39.297 [2024-11-18 18:13:57.773588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.773643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.787029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190fa3a0 00:15:39.297 [2024-11-18 18:13:57.788211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.788243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.801934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f9f68 00:15:39.297 [2024-11-18 18:13:57.803135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.803165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.816391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f9b30 00:15:39.297 [2024-11-18 18:13:57.817697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.817725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.830843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f96f8 00:15:39.297 [2024-11-18 18:13:57.832007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.832039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.845010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f92c0 00:15:39.297 [2024-11-18 18:13:57.846116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.859318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f8e88 00:15:39.297 [2024-11-18 18:13:57.860439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.860758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.874901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f8a50 00:15:39.297 [2024-11-18 18:13:57.876185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.876392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:39.297 [2024-11-18 18:13:57.889648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f8618 00:15:39.297 [2024-11-18 18:13:57.890936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.297 [2024-11-18 18:13:57.891138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.905380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f81e0 00:15:39.557 [2024-11-18 18:13:57.906734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.906935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.920261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f7da8 00:15:39.557 [2024-11-18 18:13:57.921506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.921730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.934706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f7970 00:15:39.557 [2024-11-18 18:13:57.935910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.936110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.950730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f7538 00:15:39.557 [2024-11-18 18:13:57.952087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.952292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.967379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f7100 00:15:39.557 [2024-11-18 18:13:57.968708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.968891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:57.984033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f6cc8 00:15:39.557 [2024-11-18 18:13:57.985284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:57.985501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.000719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f6890 00:15:39.557 [2024-11-18 18:13:58.002017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.002219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.017309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f6458 00:15:39.557 [2024-11-18 18:13:58.018378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.018416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.032912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f6020 00:15:39.557 [2024-11-18 18:13:58.034009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.034042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.047928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f5be8 00:15:39.557 [2024-11-18 18:13:58.048904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.048937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.062751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f57b0 00:15:39.557 [2024-11-18 18:13:58.063745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.063794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.077007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f5378 00:15:39.557 [2024-11-18 18:13:58.077950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.078117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.091274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f4f40 00:15:39.557 [2024-11-18 18:13:58.092287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.092319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.105667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f4b08 00:15:39.557 [2024-11-18 18:13:58.106875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.107068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.120434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f46d0 00:15:39.557 [2024-11-18 18:13:58.121388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.121422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.134643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f4298 00:15:39.557 [2024-11-18 18:13:58.135584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.135645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:39.557 [2024-11-18 18:13:58.149053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f3e60 00:15:39.557 [2024-11-18 18:13:58.149956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.557 [2024-11-18 18:13:58.150114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.164321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f3a28 00:15:39.817 [2024-11-18 18:13:58.165220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.165377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.178824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f35f0 00:15:39.817 [2024-11-18 18:13:58.179746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.179780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.193078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f31b8 00:15:39.817 [2024-11-18 18:13:58.193980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.194012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.207507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f2d80 00:15:39.817 [2024-11-18 18:13:58.208427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.208460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.221933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f2948 00:15:39.817 [2024-11-18 18:13:58.223143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.223193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.236505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f2510 00:15:39.817 [2024-11-18 18:13:58.237364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.237397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.250551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f20d8 00:15:39.817 [2024-11-18 18:13:58.251451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.251499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.264332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f1ca0 00:15:39.817 [2024-11-18 18:13:58.265149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.265182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.279079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f1868 00:15:39.817 [2024-11-18 18:13:58.280105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.280132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.294313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f1430 00:15:39.817 [2024-11-18 18:13:58.295285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.295315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.311243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f0ff8 00:15:39.817 [2024-11-18 18:13:58.312094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.312155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.326792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f0bc0 00:15:39.817 [2024-11-18 18:13:58.327588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.327663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.341587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f0788 00:15:39.817 [2024-11-18 18:13:58.342386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.342421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.357880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190f0350 00:15:39.817 [2024-11-18 18:13:58.358736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.358776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.374203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eff18 00:15:39.817 [2024-11-18 18:13:58.375010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.375046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.389681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190efae0 00:15:39.817 [2024-11-18 18:13:58.390477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.390514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:39.817 [2024-11-18 18:13:58.406107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ef6a8 00:15:39.817 [2024-11-18 18:13:58.406919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.817 [2024-11-18 18:13:58.406954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.423148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ef270 00:15:40.077 [2024-11-18 18:13:58.423963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 
18:13:58.424012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.438647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eee38 00:15:40.077 [2024-11-18 18:13:58.439403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.439435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.453792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eea00 00:15:40.077 [2024-11-18 18:13:58.454613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.454646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.468337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ee5c8 00:15:40.077 [2024-11-18 18:13:58.469102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.469135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.482620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ee190 00:15:40.077 [2024-11-18 18:13:58.483327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.483358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.497215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190edd58 00:15:40.077 [2024-11-18 18:13:58.498036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.498068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.513121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ed920 00:15:40.077 [2024-11-18 18:13:58.513850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.513886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.529648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ed4e8 00:15:40.077 [2024-11-18 18:13:58.530364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 
[2024-11-18 18:13:58.530401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.544907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ed0b0 00:15:40.077 [2024-11-18 18:13:58.545667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.077 [2024-11-18 18:13:58.545713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:40.077 [2024-11-18 18:13:58.560181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ecc78 00:15:40.078 [2024-11-18 18:13:58.560860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.560896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.576080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ec840 00:15:40.078 [2024-11-18 18:13:58.576784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.576813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.591287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ec408 00:15:40.078 [2024-11-18 18:13:58.591948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.591993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.606601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ebfd0 00:15:40.078 [2024-11-18 18:13:58.607275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.607320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.621649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ebb98 00:15:40.078 [2024-11-18 18:13:58.622301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.622341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.636489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eb760 00:15:40.078 [2024-11-18 18:13:58.637151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:40.078 [2024-11-18 18:13:58.637203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.651299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eb328 00:15:40.078 [2024-11-18 18:13:58.651892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.651923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:40.078 [2024-11-18 18:13:58.666373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eaef0 00:15:40.078 [2024-11-18 18:13:58.667001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.078 [2024-11-18 18:13:58.667032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.681849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190eaab8 00:15:40.338 [2024-11-18 18:13:58.682468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.682498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.696365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ea680 00:15:40.338 [2024-11-18 18:13:58.696924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.696954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.712305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190ea248 00:15:40.338 [2024-11-18 18:13:58.712909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.712968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.727238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e9e10 00:15:40.338 [2024-11-18 18:13:58.727786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.727816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.741310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e99d8 00:15:40.338 [2024-11-18 18:13:58.741846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9469 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:40.338 [2024-11-18 18:13:58.741875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.755519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e95a0 00:15:40.338 [2024-11-18 18:13:58.756052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.756080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.769688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e9168 00:15:40.338 [2024-11-18 18:13:58.770191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.770262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.783935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e8d30 00:15:40.338 [2024-11-18 18:13:58.784431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.784460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.798083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e88f8 00:15:40.338 [2024-11-18 18:13:58.798643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.798673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.812429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e84c0 00:15:40.338 [2024-11-18 18:13:58.812919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.812948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.827159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e8088 00:15:40.338 [2024-11-18 18:13:58.827640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.827669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.842876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e7c50 00:15:40.338 [2024-11-18 18:13:58.843362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7650 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.843409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.857300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e7818 00:15:40.338 [2024-11-18 18:13:58.857756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.857789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.871827] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e73e0 00:15:40.338 [2024-11-18 18:13:58.872266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.872295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.886174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e6fa8 00:15:40.338 [2024-11-18 18:13:58.886678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.338 [2024-11-18 18:13:58.886709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:40.338 [2024-11-18 18:13:58.900775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e6b70 00:15:40.339 [2024-11-18 18:13:58.901320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.339 [2024-11-18 18:13:58.901351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:40.339 [2024-11-18 18:13:58.916705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e6738 00:15:40.339 [2024-11-18 18:13:58.917185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.339 [2024-11-18 18:13:58.917214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:40.339 [2024-11-18 18:13:58.931250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e6300 00:15:40.339 [2024-11-18 18:13:58.931660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.339 [2024-11-18 18:13:58.931690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:58.946394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e5ec8 00:15:40.598 [2024-11-18 18:13:58.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17369 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:58.946838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:58.960689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e5a90 00:15:40.598 [2024-11-18 18:13:58.961060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:58.961090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:58.974932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e5658 00:15:40.598 [2024-11-18 18:13:58.975306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:58.975334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:58.989064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e5220 00:15:40.598 [2024-11-18 18:13:58.989428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:58.989457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:59.003210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e4de8 00:15:40.598 [2024-11-18 18:13:59.003566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:59.003605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:59.017251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e49b0 00:15:40.598 [2024-11-18 18:13:59.017636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:59.017665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:59.031404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e4578 00:15:40.598 [2024-11-18 18:13:59.031748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:59.031769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:59.045691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e4140 00:15:40.598 [2024-11-18 18:13:59.046025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.598 [2024-11-18 18:13:59.046056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:40.598 [2024-11-18 18:13:59.059965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e3d08 00:15:40.598 [2024-11-18 18:13:59.060266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.060295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.074023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e38d0 00:15:40.599 [2024-11-18 18:13:59.074354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.074384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.088682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e3498 00:15:40.599 [2024-11-18 18:13:59.088974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.089003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.103128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e3060 00:15:40.599 [2024-11-18 18:13:59.103403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.103444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.117320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e2c28 00:15:40.599 [2024-11-18 18:13:59.117610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.117653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.131725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e27f0 00:15:40.599 [2024-11-18 18:13:59.131967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.132006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.145967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e23b8 00:15:40.599 [2024-11-18 18:13:59.146258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.146280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.162301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e1f80 00:15:40.599 [2024-11-18 18:13:59.162528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.162562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.178651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e1b48 00:15:40.599 [2024-11-18 18:13:59.178858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.178893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:40.599 [2024-11-18 18:13:59.195084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e1710 00:15:40.599 [2024-11-18 18:13:59.195288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.599 [2024-11-18 18:13:59.195308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.211820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e12d8 00:15:40.859 [2024-11-18 18:13:59.212054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.212074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.227901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e0ea0 00:15:40.859 [2024-11-18 18:13:59.228118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.228138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.243203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e0a68 00:15:40.859 [2024-11-18 18:13:59.243383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.243403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.257536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e0630 00:15:40.859 [2024-11-18 18:13:59.257715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.257735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.271860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190e01f8 00:15:40.859 [2024-11-18 18:13:59.272009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.272029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.285986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190dfdc0 00:15:40.859 [2024-11-18 18:13:59.286135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.286155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.300399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190df988 00:15:40.859 [2024-11-18 18:13:59.300538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.300573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.314651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190df550 00:15:40.859 [2024-11-18 18:13:59.314767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.314787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.329034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190df118 00:15:40.859 [2024-11-18 18:13:59.329162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.329192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.344367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190dece0 00:15:40.859 [2024-11-18 18:13:59.344482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.344504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.359280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190de8a8 00:15:40.859 [2024-11-18 18:13:59.359391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.359412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.374051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190de038 00:15:40.859 [2024-11-18 18:13:59.374132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.374152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.396639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190de038 00:15:40.859 [2024-11-18 18:13:59.398079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.398128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.412125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190de470 00:15:40.859 [2024-11-18 18:13:59.413521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.413595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.428717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190de8a8 00:15:40.859 [2024-11-18 18:13:59.430086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.430134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.444412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190dece0 00:15:40.859 [2024-11-18 18:13:59.445763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:40.859 [2024-11-18 18:13:59.445809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:40.859 [2024-11-18 18:13:59.460510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190df118 00:15:41.118 [2024-11-18 18:13:59.462034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:41.118 [2024-11-18 18:13:59.462101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:41.118 [2024-11-18 18:13:59.476183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26dc0) with pdu=0x2000190df550 00:15:41.118 [2024-11-18 
18:13:59.477527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:41.118 [2024-11-18 18:13:59.477598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:41.118 00:15:41.118 Latency(us) 00:15:41.118 [2024-11-18T18:13:59.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.118 [2024-11-18T18:13:59.722Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.118 nvme0n1 : 2.01 16946.48 66.20 0.00 0.00 7547.01 6017.40 22163.08 00:15:41.118 [2024-11-18T18:13:59.722Z] =================================================================================================================== 00:15:41.118 [2024-11-18T18:13:59.722Z] Total : 16946.48 66.20 0.00 0.00 7547.01 6017.40 22163.08 00:15:41.118 0 00:15:41.118 18:13:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:15:41.118 18:13:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:15:41.118 18:13:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:15:41.118 18:13:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:15:41.118 | .driver_specific 00:15:41.118 | .nvme_error 00:15:41.118 | .status_code 00:15:41.118 | .command_transient_transport_error' 00:15:41.378 18:13:59 -- host/digest.sh@71 -- # (( 133 > 0 )) 00:15:41.378 18:13:59 -- host/digest.sh@73 -- # killprocess 72052 00:15:41.378 18:13:59 -- common/autotest_common.sh@936 -- # '[' -z 72052 ']' 00:15:41.378 18:13:59 -- common/autotest_common.sh@940 -- # kill -0 72052 00:15:41.378 18:13:59 -- common/autotest_common.sh@941 -- # uname 00:15:41.378 18:13:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.378 18:13:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72052 00:15:41.378 18:13:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:41.378 killing process with pid 72052 00:15:41.378 18:13:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:41.378 18:13:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72052' 00:15:41.378 Received shutdown signal, test time was about 2.000000 seconds 00:15:41.378 00:15:41.378 Latency(us) 00:15:41.378 [2024-11-18T18:13:59.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.378 [2024-11-18T18:13:59.982Z] =================================================================================================================== 00:15:41.378 [2024-11-18T18:13:59.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.378 18:13:59 -- common/autotest_common.sh@955 -- # kill 72052 00:15:41.378 18:13:59 -- common/autotest_common.sh@960 -- # wait 72052 00:15:41.637 18:14:00 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:15:41.637 18:14:00 -- host/digest.sh@54 -- # local rw bs qd 00:15:41.637 18:14:00 -- host/digest.sh@56 -- # rw=randwrite 00:15:41.637 18:14:00 -- host/digest.sh@56 -- # bs=131072 00:15:41.637 18:14:00 -- host/digest.sh@56 -- # qd=16 00:15:41.637 18:14:00 -- host/digest.sh@58 -- # bperfpid=72112 00:15:41.638 18:14:00 -- host/digest.sh@60 -- # waitforlisten 72112 /var/tmp/bperf.sock 00:15:41.638 18:14:00 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:15:41.638 18:14:00 -- 
common/autotest_common.sh@829 -- # '[' -z 72112 ']' 00:15:41.638 18:14:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:41.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:41.638 18:14:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.638 18:14:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:41.638 18:14:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.638 18:14:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.638 [2024-11-18 18:14:00.061428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:41.638 [2024-11-18 18:14:00.061574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72112 ] 00:15:41.638 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:41.638 Zero copy mechanism will not be used. 00:15:41.638 [2024-11-18 18:14:00.198480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.896 [2024-11-18 18:14:00.250157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.832 18:14:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.832 18:14:01 -- common/autotest_common.sh@862 -- # return 0 00:15:42.832 18:14:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:42.832 18:14:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:15:42.832 18:14:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:15:42.832 18:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.832 18:14:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.832 18:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.832 18:14:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:42.832 18:14:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:43.090 nvme0n1 00:15:43.349 18:14:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:15:43.349 18:14:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.349 18:14:01 -- common/autotest_common.sh@10 -- # set +x 00:15:43.349 18:14:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.349 18:14:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:15:43.349 18:14:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:43.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:43.349 Zero copy mechanism will not be used. 00:15:43.349 Running I/O for 2 seconds... 
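The xtrace above shows how the digest test wires this run up before the I/O starts. As a hedged sketch (command arguments, socket paths, bdev/controller names and the -i 32 injection interval are copied verbatim from the trace; the one assumption is that rpc_cmd addresses the nvmf target app's default RPC socket while bperf_rpc addresses /var/tmp/bperf.sock), the flow is roughly:

    # start bdevperf with its own RPC socket: 131072-byte random writes, queue depth 16, 2 seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # host side: keep per-controller NVMe error statistics and retry failed I/O indefinitely
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side: make sure crc32c error injection starts disabled (assumed default RPC socket for rpc_cmd)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # host side: attach the TCP controller with data digest (--ddgst) enabled, exposing nvme0n1
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: corrupt every 32nd crc32c calculation so data digests mismatch
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # drive the workload, then count commands that completed with a transient transport error
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each "Data digest error" line from tcp.c in the output below is one of those injected corruptions being detected on the target, and the paired completion shows the command being returned to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22); the iostat counter read at the end is expected to be non-zero, as it was for the previous queue-depth-128 run (133).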
00:15:43.349 [2024-11-18 18:14:01.805887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.349 [2024-11-18 18:14:01.806274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.349 [2024-11-18 18:14:01.806323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.811129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.811472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.811516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.816189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.816528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.816584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.821301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.821646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.821695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.826415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.826791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.826824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.831291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.831634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.831666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.836367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.836716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.836749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.841251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.841602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.841650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.846280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.846706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.846739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.851393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.851746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.851778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.856512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.856887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.861589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.861922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.861954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.866645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.866972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.867003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.871574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.871910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.871941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.876622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.877017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.877064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.881684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.882025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.882056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.886747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.887075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.887107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.891600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.891943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.891975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.896459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.896832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.896866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.350 [2024-11-18 18:14:01.901447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.350 [2024-11-18 18:14:01.901801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.350 [2024-11-18 18:14:01.901833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.906318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.906725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.906758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.911640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.911954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.911978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.916616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.916974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.917009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.921644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.921995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.922026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.926680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.926999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.931359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.931699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.931731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.936430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.936813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.936844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.941577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.941921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 
[2024-11-18 18:14:01.941952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.351 [2024-11-18 18:14:01.946347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.351 [2024-11-18 18:14:01.946727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.351 [2024-11-18 18:14:01.946759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.612 [2024-11-18 18:14:01.951666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.612 [2024-11-18 18:14:01.952035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.952066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.956862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.957215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.957245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.961648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.961979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.962010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.966523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.966900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.966930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.971438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.971781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.971812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.976246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.976573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.976621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.981074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.981393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.981424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.985960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.986309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.986342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.991170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.991515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.991575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:01.996364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:01.996751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:01.996784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.001724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.002096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.002127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.007110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.007447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.007478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.012276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.012665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.012711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.017261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.017660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.017692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.022548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.022914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.022946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.027565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.027890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.027921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.032431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.032828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.032862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.037341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.037709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.037742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.042312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.042720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.042752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.047403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.047753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.047784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.052220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.052557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.052598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.057056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.057390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.057412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.062273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.062628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.067145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.067474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.067505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.072043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.072394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.072444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.077021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 18:14:02.077357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.077388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.081878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.613 [2024-11-18 
18:14:02.082274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.613 [2024-11-18 18:14:02.082307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.613 [2024-11-18 18:14:02.086995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.087313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.087345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.091877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.092195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.092226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.096719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.097053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.097085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.101757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.102110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.102141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.106770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.107107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.107138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.111620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.111989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.112022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.116666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with 
pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.116988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.117019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.121550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.121906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.121938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.126548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.126913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.126944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.131373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.131720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.131751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.136212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.136543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.136583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.141367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.141738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.146462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.146863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.146895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.151404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.151730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.151793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.156277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.156548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.156601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.161071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.161347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.161373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.165854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.166128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.166153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.170683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.170959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.175338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.175867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.175914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.180466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.180793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.180823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.185252] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.185525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.185558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.189908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.190180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.190244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.194715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.194987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.195012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.199399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.199742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.199772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.204270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.204544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.204581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.614 [2024-11-18 18:14:02.209045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.614 [2024-11-18 18:14:02.209359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.614 [2024-11-18 18:14:02.209401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.214145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.214682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.214714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:15:43.875 [2024-11-18 18:14:02.219295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.219616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.219685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.224208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.224484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.224509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.228961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.229243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.229269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.233708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.233985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.234010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.238445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.238993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.239025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.243597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.243897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.875 [2024-11-18 18:14:02.243922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.875 [2024-11-18 18:14:02.248644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.875 [2024-11-18 18:14:02.248956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.249013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.253597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.253889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.258173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.258749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.258780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.263140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.263417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.263449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.267945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.268295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.268333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.272720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.273020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.273048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.277394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.277716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.277757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.282129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.282719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.282751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.287703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.287995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.288021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.292357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.292677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.292711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.297072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.297346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.297372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.301813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.302127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.306503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.306897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.306928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.311338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.311663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.311685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.316095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.316367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.316392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.320813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.321094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.321119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.325454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.326019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.326081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.330671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.330947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.330973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.335323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.335624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.335650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.340171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.340444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.344960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.345232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.345257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.349601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.349932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 
[2024-11-18 18:14:02.349969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.354399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.354729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.354755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.359140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.359415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.359440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.363913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.364185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.364210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.368672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.368951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.368976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.373298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.373819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.876 [2024-11-18 18:14:02.373851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.876 [2024-11-18 18:14:02.378323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.876 [2024-11-18 18:14:02.378671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.378696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.383050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.383324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.383349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.387749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.388023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.392363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.392666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.392692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.397071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.397342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.397367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.401731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.402005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.402029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.406273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.406598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.406648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.411039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.411310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.411335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.415735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.416012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.416037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.420393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.420703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.420730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.425292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.425789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.425836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.430388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.430752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.430783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.435650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.436006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.436033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.440967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.441290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.446439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.446759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.446792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.451532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.451908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.451956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.456818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.457160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.457185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.462023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.462360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.462389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.467160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.467431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.467456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:43.877 [2024-11-18 18:14:02.472484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:43.877 [2024-11-18 18:14:02.472865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:43.877 [2024-11-18 18:14:02.472898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.137 [2024-11-18 18:14:02.478148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.137 [2024-11-18 18:14:02.478495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.137 [2024-11-18 18:14:02.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.137 [2024-11-18 18:14:02.483432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.137 [2024-11-18 18:14:02.483806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.137 [2024-11-18 18:14:02.483839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.137 [2024-11-18 18:14:02.489063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.137 
[2024-11-18 18:14:02.489399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.137 [2024-11-18 18:14:02.489426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.137 [2024-11-18 18:14:02.494246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.137 [2024-11-18 18:14:02.494571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.137 [2024-11-18 18:14:02.494599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.499366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.499704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.499737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.504576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.504935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.504987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.509641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.509999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.510024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.514660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.514964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.514989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.519325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.519635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.519663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.524104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with 
pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.524398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.524424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.528884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.529164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.529191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.533629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.533917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.533957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.538392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.538750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.538781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.543324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.543629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.543666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.548235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.548505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.548539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.553156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.553429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.553454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.557967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.558283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.558310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.562769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.563066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.563086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.567484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.567833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.567864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.572280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.572554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.572588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.577125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.577638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.577695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.582194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.582539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.582577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.587092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.587371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.587396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.591798] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.592073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.592098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.596394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.596716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.596742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.601193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.601727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.601758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.606428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.606777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.606854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.611518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.611950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.611998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.617202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.617715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.617747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.138 [2024-11-18 18:14:02.622763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.138 [2024-11-18 18:14:02.623154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.138 [2024-11-18 18:14:02.623175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
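[editor's note] The repeated "Data digest error" entries above come from the NVMe/TCP data digest (DDGST) check: the transport computes a CRC32C over each received PDU payload and compares it with the digest carried in the PDU, and every failed WRITE then completes with TRANSIENT TRANSPORT ERROR (00/22), which is consistent with a test that deliberately corrupts the digest rather than with a hardware fault. The sketch below is illustrative only and is not SPDK's code path (that is the data_crc32_calc_done callback in tcp.c named in the log); the bitwise crc32c() routine, the ddgst_verify() helper, and main() are all hypothetical names used to show what a data digest mismatch means.

    /*
     * Illustrative sketch, not SPDK's implementation: NVMe/TCP protects PDU
     * payloads with a CRC32C data digest (DDGST). A mismatch between the
     * digest computed over the received bytes and the digest carried in the
     * PDU is what the log above reports as "Data digest error".
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise (table-less) CRC32C, reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
            }
        }

        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical helper: 0 when the received digest matches the payload. */
    static int ddgst_verify(const uint8_t *data, size_t len, uint32_t received_ddgst)
    {
        return crc32c(data, len) == received_ddgst ? 0 : -1;
    }

    int main(void)
    {
        uint8_t payload[32];

        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload));

        printf("intact payload:    %s\n",
               ddgst_verify(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");

        payload[7] ^= 0x01; /* flip one bit, as a digest error-injection test would */
        printf("corrupted payload: %s\n",
               ddgst_verify(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");

        return 0;
    }

Run standalone, the second check reports "digest error" for the corrupted buffer, mirroring the per-WRITE failures logged here; the raw console output continues below.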
00:15:44.138 [2024-11-18 18:14:02.628468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.628854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.628887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.634271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.634607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.634637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.639641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.639991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.640016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.645087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.645531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.645604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.651053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.651345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.651371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.656282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.656790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.656823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.662273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.662598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.662627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.667799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.668146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.668171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.673210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.673480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.673505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.678605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.678950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.678976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.683974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.684260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.684301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.689125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.689399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.689441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.694163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.694507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.694545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.699365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.699862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.704452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.704824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.704873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.709390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.709685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.709710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.714140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.714476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.714503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.719145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.719600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.719643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.724190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.724486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.724512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.729142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.729421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.729446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.139 [2024-11-18 18:14:02.734268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.139 [2024-11-18 18:14:02.734652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.139 [2024-11-18 18:14:02.734679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.739695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.739980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.740021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.744646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.744971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.744998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.749444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.749798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.749829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.754313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.754801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.754847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.759298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.759739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.759788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.764251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.764567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.764610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.769147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.769465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 
[2024-11-18 18:14:02.769496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.774033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.774378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.774411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.779160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.779478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.779509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.784195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.784506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.784559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.789167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.789479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.789510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.793940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.794293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.794324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.798698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.799046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.799077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.803365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.803690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.803719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.808044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.808353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.808374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.812975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.813283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.813324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.817864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.818176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.818243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.822577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.822924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.822953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.827230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.827540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.827596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.831971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.400 [2024-11-18 18:14:02.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.400 [2024-11-18 18:14:02.832311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.400 [2024-11-18 18:14:02.836662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.836973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.837002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.841300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.841621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.841666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.846001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.846356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.846388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.850700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.851015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.851044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.855450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.855796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.855827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.860158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.860468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.860499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.864855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.865163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.865193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.869700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.870032] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.870062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.874735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.875077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.879736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.880099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.880129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.884806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.885110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.885141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.889618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.889921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.889953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.894445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.894803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.894833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.899224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.899591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.899648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.904395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.904748] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.904778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.909387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.909747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.909780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.914721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.915077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.915124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.919980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.920305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.920336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.925420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.925800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.925833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.930962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.931286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.931318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.936277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.936627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.936676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.941492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 
00:15:44.401 [2024-11-18 18:14:02.941867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.941900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.946823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.947179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.947210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.952158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.952491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.952522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.956988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.957324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.957355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.961709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.962056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.962087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.966850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.401 [2024-11-18 18:14:02.967202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.401 [2024-11-18 18:14:02.967249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.401 [2024-11-18 18:14:02.971666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.972007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.972048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.402 [2024-11-18 18:14:02.976553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.976903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.976935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.402 [2024-11-18 18:14:02.981638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.981972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.982003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.402 [2024-11-18 18:14:02.986389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.986779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.986810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.402 [2024-11-18 18:14:02.991305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.402 [2024-11-18 18:14:02.996463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.402 [2024-11-18 18:14:02.996843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.402 [2024-11-18 18:14:02.996875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.001593] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.001961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.001989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.006660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.007012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.007044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.011587] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.011923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.011955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.016309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.016656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.016688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.021362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.021729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.021760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.026129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.026488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.026521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.031075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.031400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.031431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.036060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.036397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.036428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.040884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.041222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.041252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:15:44.662 [2024-11-18 18:14:03.045719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.046091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.050728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.662 [2024-11-18 18:14:03.051081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.662 [2024-11-18 18:14:03.051112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.662 [2024-11-18 18:14:03.055480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.055841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.055871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.060389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.060710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.060751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.065221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.065547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.065568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.070059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.070411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.070443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.075177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.075514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.075571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.080139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.080467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.080497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.085266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.085602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.085651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.090174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.090530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.090571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.095044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.095370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.095410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.099980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.100309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.100340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.104775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.105101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.105131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.109438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.109781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.109811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.114355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.114701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.114731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.119118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.119447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.119477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.123929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.124247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.124277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.128703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.129031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.129062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.133721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.134079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.134110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.138755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.139090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.139123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.143657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.144016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.144059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.148764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.149138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.154125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.154485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.154518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.159440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.159821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.159854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.164696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.165095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.165125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.170040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.170399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.170431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.175107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.175443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.175474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.180174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.180502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 
[2024-11-18 18:14:03.180542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.663 [2024-11-18 18:14:03.184960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.663 [2024-11-18 18:14:03.185296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.663 [2024-11-18 18:14:03.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.189824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.190159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.190190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.194924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.195242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.195273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.199772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.200122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.200154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.204786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.205171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.205203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.209759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.210108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.210139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.214643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.214966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.214997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.219520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.219862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.219893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.224351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.224683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.224713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.229190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.229524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.229565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.234103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.234469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.234502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.239057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.239390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.239421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.243771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.244213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.244262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.248868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.249279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.249331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.253873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.254225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.254277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.664 [2024-11-18 18:14:03.258994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.664 [2024-11-18 18:14:03.259400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.664 [2024-11-18 18:14:03.259435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.264282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.264594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.264636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.269260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.269618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.269661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.274229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.274585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.274631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.279163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.279498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.279541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.284117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.284452] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.284483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.289161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.289480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.289512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.293981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.294384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.294417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.924 [2024-11-18 18:14:03.298965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.924 [2024-11-18 18:14:03.299301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.924 [2024-11-18 18:14:03.299333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.304135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.304467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.304499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.309116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.309444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.309474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.313839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.314164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.314194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.318678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.319046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.319077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.323652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.324000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.324030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.328483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.328858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.328891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.333218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.333549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.333588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.337952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.338308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.338346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.342911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.343237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.343267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.347645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.347992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.348023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.352464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 
[2024-11-18 18:14:03.352821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.352852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.357296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.357653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.357684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.362085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.362449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.362481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.366975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.367308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.367339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.371845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.372185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.372215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.376637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.376940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.376970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.381333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.381693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.381724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.386103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with 
pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.386475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.386508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.391312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.391660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.391704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.396392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.396697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.396758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.401153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.401483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.401513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.405939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.406294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.406326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.410981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.411303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.411333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.415758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.416104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.416134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.420586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.420896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.420926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.425290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.425646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.425676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.925 [2024-11-18 18:14:03.430262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.925 [2024-11-18 18:14:03.430622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.925 [2024-11-18 18:14:03.430651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.435124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.435448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.435478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.439952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.440287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.440319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.444869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.445188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.445219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.450245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.450566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.450598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.455514] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.455889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.455935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.460894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.461269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.461302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.466454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.466774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.466807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.471721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.472105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.472136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.477006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.477337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.477368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.482384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.482714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.482746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.487555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.487921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.487968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
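Every entry pair in this stretch of the log follows the same pattern: data_crc32_calc_done in tcp.c reports a data digest (CRC32C) mismatch on the TCP qpair, and the WRITE it belongs to is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the status the digest test counts afterwards; only the LBA and the rotating sqhd value change from pair to pair. If a quick tally of these completions from a saved copy of this output is ever needed, something like the line below works; the log file name is only illustrative, not something the harness produces.

    # Count transient transport error completions in a captured log (file name assumed).
    grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-digest.log | wc -l
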
00:15:44.926 [2024-11-18 18:14:03.492880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.493290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.498190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.498522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.498564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.503488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.503858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.503891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.508851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.509203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.509235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.514427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.514748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.514787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.519772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:44.926 [2024-11-18 18:14:03.520104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:44.926 [2024-11-18 18:14:03.520136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:44.926 [2024-11-18 18:14:03.525291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.525643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.525692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.530640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.531001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.531033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.535617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.535943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.535973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.540370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.540722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.540752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.545161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.545495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.545526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.549907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.550272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.550304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.554757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.555115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.555146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.187 [2024-11-18 18:14:03.559576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.187 [2024-11-18 18:14:03.559914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.187 [2024-11-18 18:14:03.559945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.564368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.564722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.564753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.569246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.569580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.569614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.573985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.574338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.574371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.578819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.579203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.583629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.583969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.584000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.588337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.588688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.588719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.593145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.593484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.593515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.597842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.598179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.598251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.602794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.603147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.603178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.607586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.607926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.607956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.612418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.612766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.612797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.617336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.617690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.617722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.622139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.622523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.622567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.627116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.627449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 
18:14:03.627480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.631932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.632278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.632309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.636777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.637116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.637147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.641583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.641919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.641950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.646304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.646681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.651528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.651905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.651938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.656472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.656819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.656851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.661275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.661649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.661680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.666095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.666476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.666509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.671102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.671438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.671469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.676096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.676432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.676463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.681010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.681346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.681377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.685946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.686305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.686338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.188 [2024-11-18 18:14:03.690998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.188 [2024-11-18 18:14:03.691334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.188 [2024-11-18 18:14:03.691365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.695910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.696257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.696288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.700805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.701131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.701163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.705750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.706085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.706116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.710659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.710993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.711025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.715409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.715783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.715815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.720306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.720654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.720685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.725167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.725508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.725550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.730025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.730406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.730439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.734954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.735388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.735435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.739916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.740355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.740402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.744868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.745211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.745245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.749705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.750043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.750074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.754615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.754949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.754980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.759449] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.759821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.759853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.764263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.764596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.764639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.769291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.769631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.769662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.774069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.774460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.774493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.778918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.779253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.779284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.189 [2024-11-18 18:14:03.784051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.189 [2024-11-18 18:14:03.784412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.189 [2024-11-18 18:14:03.784444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.449 [2024-11-18 18:14:03.789253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.449 [2024-11-18 18:14:03.789576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.449 [2024-11-18 18:14:03.789616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.449 [2024-11-18 18:14:03.794272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.449 [2024-11-18 18:14:03.794656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.449 [2024-11-18 18:14:03.794689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.449 [2024-11-18 18:14:03.799209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd26f60) with pdu=0x2000190fef90 00:15:45.449 [2024-11-18 
18:14:03.799527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:45.449 [2024-11-18 18:14:03.799582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:15:45.449
00:15:45.449 Latency(us)
00:15:45.449 [2024-11-18T18:14:04.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:45.449 [2024-11-18T18:14:04.053Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:15:45.449 nvme0n1 : 2.00 6231.14 778.89 0.00 0.00 2562.47 1995.87 5749.29
00:15:45.449 [2024-11-18T18:14:04.053Z] ===================================================================================================================
00:15:45.449 [2024-11-18T18:14:04.053Z] Total : 6231.14 778.89 0.00 0.00 2562.47 1995.87 5749.29
00:15:45.449 0
00:15:45.449 18:14:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:15:45.449 18:14:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:15:45.449 18:14:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:15:45.449 | .driver_specific
00:15:45.449 | .nvme_error
00:15:45.449 | .status_code
00:15:45.449 | .command_transient_transport_error'
00:15:45.449 18:14:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:15:45.708 18:14:04 -- host/digest.sh@71 -- # (( 402 > 0 ))
00:15:45.708 18:14:04 -- host/digest.sh@73 -- # killprocess 72112
00:15:45.708 18:14:04 -- common/autotest_common.sh@936 -- # '[' -z 72112 ']'
00:15:45.708 18:14:04 -- common/autotest_common.sh@940 -- # kill -0 72112
00:15:45.708 18:14:04 -- common/autotest_common.sh@941 -- # uname
00:15:45.708 18:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:45.708 18:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72112
00:15:45.708 18:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:45.708 killing process with pid 72112
00:15:45.708 Received shutdown signal, test time was about 2.000000 seconds
00:15:45.708
00:15:45.708 Latency(us)
00:15:45.708 [2024-11-18T18:14:04.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:45.708 [2024-11-18T18:14:04.312Z] ===================================================================================================================
00:15:45.708 [2024-11-18T18:14:04.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:45.708 18:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:45.708 18:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72112'
00:15:45.708 18:14:04 -- common/autotest_common.sh@955 -- # kill 72112
00:15:45.708 18:14:04 -- common/autotest_common.sh@960 -- # wait 72112
00:15:45.708 18:14:04 -- host/digest.sh@115 -- # killprocess 71905
00:15:45.708 18:14:04 -- common/autotest_common.sh@936 -- # '[' -z 71905 ']'
00:15:45.708 18:14:04 -- common/autotest_common.sh@940 -- # kill -0 71905
00:15:45.708 18:14:04 -- common/autotest_common.sh@941 -- # uname
00:15:45.708 18:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:45.708 18:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71905
00:15:45.967 18:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:45.967 killing process with pid 71905
00:15:45.967 18:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:45.967 18:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71905'
00:15:45.967 18:14:04 -- common/autotest_common.sh@955 -- # kill 71905
00:15:45.967 18:14:04 -- common/autotest_common.sh@960 -- # wait 71905
00:15:45.967
00:15:45.967 real 0m18.310s
00:15:45.967 user 0m35.878s
00:15:45.967 sys 0m4.495s
00:15:45.967 18:14:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:45.967 18:14:04 -- common/autotest_common.sh@10 -- # set +x
00:15:45.967 ************************************
00:15:45.967 END TEST nvmf_digest_error
00:15:45.967 ************************************
00:15:45.967 18:14:04 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:15:45.967 18:14:04 -- host/digest.sh@139 -- # nvmftestfini
00:15:45.967 18:14:04 -- nvmf/common.sh@476 -- # nvmfcleanup
00:15:45.968 18:14:04 -- nvmf/common.sh@116 -- # sync
00:15:46.227 18:14:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:46.227 18:14:04 -- nvmf/common.sh@119 -- # set +e
00:15:46.227 18:14:04 -- nvmf/common.sh@120 -- # for i in {1..20}
00:15:46.227 18:14:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:46.227 rmmod nvme_tcp
00:15:46.227 rmmod nvme_fabrics
00:15:46.227 rmmod nvme_keyring
00:15:46.227 18:14:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:46.227 18:14:04 -- nvmf/common.sh@123 -- # set -e
00:15:46.227 18:14:04 -- nvmf/common.sh@124 -- # return 0
00:15:46.227 18:14:04 -- nvmf/common.sh@477 -- # '[' -n 71905 ']'
00:15:46.227 18:14:04 -- nvmf/common.sh@478 -- # killprocess 71905
00:15:46.227 18:14:04 -- common/autotest_common.sh@936 -- # '[' -z 71905 ']'
00:15:46.227 18:14:04 -- common/autotest_common.sh@940 -- # kill -0 71905
00:15:46.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71905) - No such process
00:15:46.227 Process with pid 71905 is not found
00:15:46.227 18:14:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71905 is not found'
00:15:46.227 18:14:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:46.227 18:14:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:46.227 18:14:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:46.227 18:14:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:46.227 18:14:04 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:46.227 18:14:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:46.227 18:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:46.227 18:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:46.227 18:14:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:46.227
00:15:46.227 real 0m35.067s
00:15:46.227 user 1m7.347s
00:15:46.227 sys 0m9.054s
00:15:46.227 18:14:04 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:46.227 18:14:04 -- common/autotest_common.sh@10 -- # set +x
00:15:46.227 ************************************
00:15:46.227 END TEST nvmf_digest
00:15:46.227 ************************************
00:15:46.227 18:14:04 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:15:46.227 18:14:04 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]]
00:15:46.227 18:14:04 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp
00:15:46.227 18:14:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:46.227 18:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:46.227 18:14:04 -- common/autotest_common.sh@10 -- # set +x 00:15:46.227
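Before the multipath output starts, it is worth spelling out how the digest test above reached its verdict, since the relevant trace is interleaved with the shutdown noise: the harness asks the bdevperf instance on /var/tmp/bperf.sock for per-bdev I/O statistics and pulls the transient-transport-error counter out of the JSON with jq (402 errors in this run, hence the passing "(( 402 > 0 ))" check). The sketch below reconstructs those helpers from the xtrace; the real definitions live in test/nvmf/host/digest.sh and may differ in detail.

    # Reconstructed from the xtrace above -- a sketch, not the verbatim script.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    bperf_rpc() {
        # Forward an RPC to the bdevperf app listening on the bperf socket.
        "$rpc_py" -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # Pull the COMMAND TRANSIENT TRANSPORT ERROR completion count for a bdev
        # out of bdev_get_iostat's driver-specific NVMe error statistics.
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test then asserts that at least one such error was counted, e.g.:
    # (( $(get_transient_errcount nvme0n1) > 0 ))
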
************************************ 00:15:46.227 START TEST nvmf_multipath 00:15:46.227 ************************************ 00:15:46.227 18:14:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:15:46.227 * Looking for test storage... 00:15:46.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.227 18:14:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:46.227 18:14:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:46.227 18:14:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:46.487 18:14:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:46.487 18:14:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:46.487 18:14:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:46.487 18:14:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:46.487 18:14:04 -- scripts/common.sh@335 -- # IFS=.-: 00:15:46.487 18:14:04 -- scripts/common.sh@335 -- # read -ra ver1 00:15:46.487 18:14:04 -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.487 18:14:04 -- scripts/common.sh@336 -- # read -ra ver2 00:15:46.487 18:14:04 -- scripts/common.sh@337 -- # local 'op=<' 00:15:46.487 18:14:04 -- scripts/common.sh@339 -- # ver1_l=2 00:15:46.487 18:14:04 -- scripts/common.sh@340 -- # ver2_l=1 00:15:46.487 18:14:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:46.487 18:14:04 -- scripts/common.sh@343 -- # case "$op" in 00:15:46.487 18:14:04 -- scripts/common.sh@344 -- # : 1 00:15:46.487 18:14:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:46.487 18:14:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.487 18:14:04 -- scripts/common.sh@364 -- # decimal 1 00:15:46.487 18:14:04 -- scripts/common.sh@352 -- # local d=1 00:15:46.487 18:14:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.487 18:14:04 -- scripts/common.sh@354 -- # echo 1 00:15:46.487 18:14:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:46.487 18:14:04 -- scripts/common.sh@365 -- # decimal 2 00:15:46.487 18:14:04 -- scripts/common.sh@352 -- # local d=2 00:15:46.487 18:14:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.487 18:14:04 -- scripts/common.sh@354 -- # echo 2 00:15:46.487 18:14:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:46.487 18:14:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:46.487 18:14:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:46.487 18:14:04 -- scripts/common.sh@367 -- # return 0 00:15:46.487 18:14:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.487 18:14:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.487 --rc genhtml_branch_coverage=1 00:15:46.487 --rc genhtml_function_coverage=1 00:15:46.487 --rc genhtml_legend=1 00:15:46.487 --rc geninfo_all_blocks=1 00:15:46.487 --rc geninfo_unexecuted_blocks=1 00:15:46.487 00:15:46.487 ' 00:15:46.487 18:14:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.487 --rc genhtml_branch_coverage=1 00:15:46.487 --rc genhtml_function_coverage=1 00:15:46.487 --rc genhtml_legend=1 00:15:46.487 --rc geninfo_all_blocks=1 00:15:46.487 --rc geninfo_unexecuted_blocks=1 00:15:46.487 00:15:46.487 ' 00:15:46.487 18:14:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:46.487 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.487 --rc genhtml_branch_coverage=1 00:15:46.487 --rc genhtml_function_coverage=1 00:15:46.487 --rc genhtml_legend=1 00:15:46.487 --rc geninfo_all_blocks=1 00:15:46.487 --rc geninfo_unexecuted_blocks=1 00:15:46.487 00:15:46.487 ' 00:15:46.487 18:14:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:46.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.487 --rc genhtml_branch_coverage=1 00:15:46.487 --rc genhtml_function_coverage=1 00:15:46.487 --rc genhtml_legend=1 00:15:46.487 --rc geninfo_all_blocks=1 00:15:46.487 --rc geninfo_unexecuted_blocks=1 00:15:46.487 00:15:46.487 ' 00:15:46.487 18:14:04 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.488 18:14:04 -- nvmf/common.sh@7 -- # uname -s 00:15:46.488 18:14:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.488 18:14:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.488 18:14:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.488 18:14:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.488 18:14:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.488 18:14:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.488 18:14:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.488 18:14:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.488 18:14:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.488 18:14:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:15:46.488 18:14:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:15:46.488 18:14:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.488 18:14:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.488 18:14:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.488 18:14:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.488 18:14:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.488 18:14:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.488 18:14:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.488 18:14:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.488 18:14:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.488 18:14:04 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.488 18:14:04 -- paths/export.sh@5 -- # export PATH 00:15:46.488 18:14:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.488 18:14:04 -- nvmf/common.sh@46 -- # : 0 00:15:46.488 18:14:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:46.488 18:14:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:46.488 18:14:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:46.488 18:14:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.488 18:14:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.488 18:14:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:46.488 18:14:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:46.488 18:14:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:46.488 18:14:04 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.488 18:14:04 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.488 18:14:04 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.488 18:14:04 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:46.488 18:14:04 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:46.488 18:14:04 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:46.488 18:14:04 -- host/multipath.sh@30 -- # nvmftestinit 00:15:46.488 18:14:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:46.488 18:14:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.488 18:14:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:46.488 18:14:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:46.488 18:14:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:46.488 18:14:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.488 18:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.488 18:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.488 18:14:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:46.488 18:14:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:46.488 18:14:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.488 18:14:04 -- nvmf/common.sh@141 -- 
# NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.488 18:14:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.488 18:14:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:46.488 18:14:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.488 18:14:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.488 18:14:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.488 18:14:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.488 18:14:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.488 18:14:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.488 18:14:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.488 18:14:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.488 18:14:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:46.488 18:14:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:46.488 Cannot find device "nvmf_tgt_br" 00:15:46.488 18:14:04 -- nvmf/common.sh@154 -- # true 00:15:46.488 18:14:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.488 Cannot find device "nvmf_tgt_br2" 00:15:46.488 18:14:04 -- nvmf/common.sh@155 -- # true 00:15:46.488 18:14:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:46.488 18:14:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:46.488 Cannot find device "nvmf_tgt_br" 00:15:46.488 18:14:04 -- nvmf/common.sh@157 -- # true 00:15:46.488 18:14:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:46.488 Cannot find device "nvmf_tgt_br2" 00:15:46.488 18:14:04 -- nvmf/common.sh@158 -- # true 00:15:46.488 18:14:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:46.488 18:14:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:46.488 18:14:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.488 18:14:05 -- nvmf/common.sh@161 -- # true 00:15:46.488 18:14:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.488 18:14:05 -- nvmf/common.sh@162 -- # true 00:15:46.488 18:14:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.488 18:14:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.748 18:14:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.748 18:14:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.748 18:14:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.748 18:14:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.748 18:14:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.748 18:14:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.748 18:14:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.748 18:14:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:46.748 18:14:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:46.748 18:14:05 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:15:46.748 18:14:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:46.748 18:14:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.748 18:14:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.748 18:14:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.748 18:14:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:46.748 18:14:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:46.748 18:14:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.748 18:14:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.748 18:14:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.748 18:14:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.748 18:14:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.748 18:14:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:46.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:46.748 00:15:46.748 --- 10.0.0.2 ping statistics --- 00:15:46.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.748 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:46.748 18:14:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:46.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:15:46.748 00:15:46.748 --- 10.0.0.3 ping statistics --- 00:15:46.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.748 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:46.748 18:14:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:15:46.748 00:15:46.748 --- 10.0.0.1 ping statistics --- 00:15:46.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.748 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:46.748 18:14:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.748 18:14:05 -- nvmf/common.sh@421 -- # return 0 00:15:46.748 18:14:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.748 18:14:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.748 18:14:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:46.748 18:14:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:46.748 18:14:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.748 18:14:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:46.748 18:14:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:46.748 18:14:05 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:15:46.748 18:14:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.748 18:14:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.748 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.748 18:14:05 -- nvmf/common.sh@469 -- # nvmfpid=72396 00:15:46.748 18:14:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:46.748 18:14:05 -- nvmf/common.sh@470 -- # waitforlisten 72396 00:15:46.748 18:14:05 -- common/autotest_common.sh@829 -- # '[' -z 72396 ']' 00:15:46.748 18:14:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.748 18:14:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.748 18:14:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.748 18:14:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.748 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.748 [2024-11-18 18:14:05.325523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:46.748 [2024-11-18 18:14:05.325656] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.008 [2024-11-18 18:14:05.463526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.008 [2024-11-18 18:14:05.521515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.008 [2024-11-18 18:14:05.521696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.008 [2024-11-18 18:14:05.521712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.008 [2024-11-18 18:14:05.521721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:47.008 [2024-11-18 18:14:05.521879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.008 [2024-11-18 18:14:05.521925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.945 18:14:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.945 18:14:06 -- common/autotest_common.sh@862 -- # return 0 00:15:47.945 18:14:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.945 18:14:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.945 18:14:06 -- common/autotest_common.sh@10 -- # set +x 00:15:47.945 18:14:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.945 18:14:06 -- host/multipath.sh@33 -- # nvmfapp_pid=72396 00:15:47.945 18:14:06 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:47.945 [2024-11-18 18:14:06.501643] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.945 18:14:06 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:48.205 Malloc0 00:15:48.205 18:14:06 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:48.464 18:14:07 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:48.723 18:14:07 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.982 [2024-11-18 18:14:07.471897] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.982 18:14:07 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:49.242 [2024-11-18 18:14:07.704039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:49.242 18:14:07 -- host/multipath.sh@44 -- # bdevperf_pid=72453 00:15:49.242 18:14:07 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:49.242 18:14:07 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:49.242 18:14:07 -- host/multipath.sh@47 -- # waitforlisten 72453 /var/tmp/bdevperf.sock 00:15:49.242 18:14:07 -- common/autotest_common.sh@829 -- # '[' -z 72453 ']' 00:15:49.242 18:14:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.242 18:14:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.242 18:14:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:49.242 18:14:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.242 18:14:07 -- common/autotest_common.sh@10 -- # set +x 00:15:50.179 18:14:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.179 18:14:08 -- common/autotest_common.sh@862 -- # return 0 00:15:50.179 18:14:08 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:50.445 18:14:09 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:50.711 Nvme0n1 00:15:50.968 18:14:09 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:51.227 Nvme0n1 00:15:51.227 18:14:09 -- host/multipath.sh@78 -- # sleep 1 00:15:51.227 18:14:09 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:52.164 18:14:10 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:15:52.164 18:14:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:52.423 18:14:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:52.682 18:14:11 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:15:52.682 18:14:11 -- host/multipath.sh@65 -- # dtrace_pid=72498 00:15:52.682 18:14:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:52.682 18:14:11 -- host/multipath.sh@66 -- # sleep 6 00:15:59.250 18:14:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:15:59.250 18:14:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:15:59.250 18:14:17 -- host/multipath.sh@67 -- # active_port=4421 00:15:59.250 18:14:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:59.250 Attaching 4 probes... 
00:15:59.250 @path[10.0.0.2, 4421]: 19932 00:15:59.250 @path[10.0.0.2, 4421]: 20422 00:15:59.250 @path[10.0.0.2, 4421]: 20237 00:15:59.250 @path[10.0.0.2, 4421]: 20191 00:15:59.250 @path[10.0.0.2, 4421]: 20070 00:15:59.250 18:14:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:15:59.250 18:14:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:15:59.250 18:14:17 -- host/multipath.sh@69 -- # sed -n 1p 00:15:59.250 18:14:17 -- host/multipath.sh@69 -- # port=4421 00:15:59.250 18:14:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:15:59.250 18:14:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:15:59.250 18:14:17 -- host/multipath.sh@72 -- # kill 72498 00:15:59.250 18:14:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:15:59.250 18:14:17 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:15:59.250 18:14:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:59.250 18:14:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:59.509 18:14:17 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:15:59.509 18:14:17 -- host/multipath.sh@65 -- # dtrace_pid=72618 00:15:59.509 18:14:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:15:59.509 18:14:17 -- host/multipath.sh@66 -- # sleep 6 00:16:06.121 18:14:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:06.121 18:14:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:06.121 18:14:24 -- host/multipath.sh@67 -- # active_port=4420 00:16:06.121 18:14:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:06.121 Attaching 4 probes... 
00:16:06.121 @path[10.0.0.2, 4420]: 19788 00:16:06.121 @path[10.0.0.2, 4420]: 20098 00:16:06.121 @path[10.0.0.2, 4420]: 19835 00:16:06.121 @path[10.0.0.2, 4420]: 20351 00:16:06.121 @path[10.0.0.2, 4420]: 20126 00:16:06.121 18:14:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:06.121 18:14:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:06.121 18:14:24 -- host/multipath.sh@69 -- # sed -n 1p 00:16:06.121 18:14:24 -- host/multipath.sh@69 -- # port=4420 00:16:06.121 18:14:24 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:06.121 18:14:24 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:06.121 18:14:24 -- host/multipath.sh@72 -- # kill 72618 00:16:06.121 18:14:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:06.121 18:14:24 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:06.121 18:14:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:06.121 18:14:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:06.380 18:14:24 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:06.380 18:14:24 -- host/multipath.sh@65 -- # dtrace_pid=72726 00:16:06.380 18:14:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:06.380 18:14:24 -- host/multipath.sh@66 -- # sleep 6 00:16:12.947 18:14:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:12.947 18:14:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:12.947 18:14:31 -- host/multipath.sh@67 -- # active_port=4421 00:16:12.947 18:14:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:12.947 Attaching 4 probes... 
00:16:12.947 @path[10.0.0.2, 4421]: 15337 00:16:12.947 @path[10.0.0.2, 4421]: 20008 00:16:12.947 @path[10.0.0.2, 4421]: 19784 00:16:12.947 @path[10.0.0.2, 4421]: 19520 00:16:12.947 @path[10.0.0.2, 4421]: 19796 00:16:12.947 18:14:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:12.947 18:14:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:12.947 18:14:31 -- host/multipath.sh@69 -- # sed -n 1p 00:16:12.947 18:14:31 -- host/multipath.sh@69 -- # port=4421 00:16:12.947 18:14:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:12.947 18:14:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:12.947 18:14:31 -- host/multipath.sh@72 -- # kill 72726 00:16:12.947 18:14:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:12.947 18:14:31 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:12.947 18:14:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:12.947 18:14:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:13.206 18:14:31 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:13.206 18:14:31 -- host/multipath.sh@65 -- # dtrace_pid=72845 00:16:13.206 18:14:31 -- host/multipath.sh@66 -- # sleep 6 00:16:13.206 18:14:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:19.775 18:14:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:19.775 18:14:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:16:19.775 18:14:37 -- host/multipath.sh@67 -- # active_port= 00:16:19.775 18:14:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:19.775 Attaching 4 probes... 
00:16:19.775 00:16:19.775 00:16:19.775 00:16:19.775 00:16:19.775 00:16:19.775 18:14:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:19.775 18:14:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:19.775 18:14:37 -- host/multipath.sh@69 -- # sed -n 1p 00:16:19.775 18:14:37 -- host/multipath.sh@69 -- # port= 00:16:19.775 18:14:37 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:16:19.775 18:14:37 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:16:19.775 18:14:37 -- host/multipath.sh@72 -- # kill 72845 00:16:19.775 18:14:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:19.775 18:14:37 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:16:19.775 18:14:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:19.775 18:14:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:20.034 18:14:38 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:16:20.034 18:14:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:20.034 18:14:38 -- host/multipath.sh@65 -- # dtrace_pid=72963 00:16:20.034 18:14:38 -- host/multipath.sh@66 -- # sleep 6 00:16:26.617 18:14:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:26.617 18:14:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:26.617 18:14:44 -- host/multipath.sh@67 -- # active_port=4421 00:16:26.617 18:14:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:26.617 Attaching 4 probes... 
00:16:26.617 @path[10.0.0.2, 4421]: 19066 00:16:26.617 @path[10.0.0.2, 4421]: 19398 00:16:26.617 @path[10.0.0.2, 4421]: 19330 00:16:26.617 @path[10.0.0.2, 4421]: 19938 00:16:26.617 @path[10.0.0.2, 4421]: 19508 00:16:26.617 18:14:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:26.617 18:14:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:26.617 18:14:44 -- host/multipath.sh@69 -- # sed -n 1p 00:16:26.617 18:14:44 -- host/multipath.sh@69 -- # port=4421 00:16:26.617 18:14:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:26.617 18:14:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:26.617 18:14:44 -- host/multipath.sh@72 -- # kill 72963 00:16:26.617 18:14:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:26.617 18:14:44 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:26.617 [2024-11-18 18:14:45.066728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067014] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067261] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 [2024-11-18 18:14:45.067325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be230 is same with the state(5) to be set 00:16:26.617 18:14:45 -- host/multipath.sh@101 -- # sleep 1 00:16:27.554 18:14:46 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:16:27.554 18:14:46 -- host/multipath.sh@65 -- # dtrace_pid=73086 00:16:27.554 18:14:46 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:27.554 18:14:46 -- host/multipath.sh@66 -- # sleep 6 00:16:34.121 18:14:52 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners 
nqn.2016-06.io.spdk:cnode1 00:16:34.121 18:14:52 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:34.121 18:14:52 -- host/multipath.sh@67 -- # active_port=4420 00:16:34.121 18:14:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:34.121 Attaching 4 probes... 00:16:34.121 @path[10.0.0.2, 4420]: 18863 00:16:34.121 @path[10.0.0.2, 4420]: 19491 00:16:34.121 @path[10.0.0.2, 4420]: 19401 00:16:34.121 @path[10.0.0.2, 4420]: 19257 00:16:34.121 @path[10.0.0.2, 4420]: 19425 00:16:34.121 18:14:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:34.121 18:14:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:34.121 18:14:52 -- host/multipath.sh@69 -- # sed -n 1p 00:16:34.121 18:14:52 -- host/multipath.sh@69 -- # port=4420 00:16:34.121 18:14:52 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:34.121 18:14:52 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:34.121 18:14:52 -- host/multipath.sh@72 -- # kill 73086 00:16:34.121 18:14:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:34.121 18:14:52 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:34.121 [2024-11-18 18:14:52.641310] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:34.121 18:14:52 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:34.381 18:14:52 -- host/multipath.sh@111 -- # sleep 6 00:16:40.950 18:14:58 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:16:40.950 18:14:58 -- host/multipath.sh@65 -- # dtrace_pid=73266 00:16:40.950 18:14:58 -- host/multipath.sh@66 -- # sleep 6 00:16:40.950 18:14:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72396 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:47.577 18:15:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:47.577 18:15:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:47.577 18:15:05 -- host/multipath.sh@67 -- # active_port=4421 00:16:47.577 18:15:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.577 Attaching 4 probes... 
00:16:47.577 @path[10.0.0.2, 4421]: 18424 00:16:47.577 @path[10.0.0.2, 4421]: 17253 00:16:47.577 @path[10.0.0.2, 4421]: 18457 00:16:47.577 @path[10.0.0.2, 4421]: 18887 00:16:47.577 @path[10.0.0.2, 4421]: 19793 00:16:47.577 18:15:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:47.577 18:15:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:47.577 18:15:05 -- host/multipath.sh@69 -- # sed -n 1p 00:16:47.577 18:15:05 -- host/multipath.sh@69 -- # port=4421 00:16:47.577 18:15:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:47.577 18:15:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:47.577 18:15:05 -- host/multipath.sh@72 -- # kill 73266 00:16:47.577 18:15:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.577 18:15:05 -- host/multipath.sh@114 -- # killprocess 72453 00:16:47.577 18:15:05 -- common/autotest_common.sh@936 -- # '[' -z 72453 ']' 00:16:47.577 18:15:05 -- common/autotest_common.sh@940 -- # kill -0 72453 00:16:47.577 18:15:05 -- common/autotest_common.sh@941 -- # uname 00:16:47.577 18:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.577 18:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72453 00:16:47.577 killing process with pid 72453 00:16:47.577 18:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:47.577 18:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:47.577 18:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72453' 00:16:47.577 18:15:05 -- common/autotest_common.sh@955 -- # kill 72453 00:16:47.577 18:15:05 -- common/autotest_common.sh@960 -- # wait 72453 00:16:47.577 Connection closed with partial response: 00:16:47.577 00:16:47.577 00:16:47.577 18:15:05 -- host/multipath.sh@116 -- # wait 72453 00:16:47.577 18:15:05 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:47.577 [2024-11-18 18:14:07.776557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:47.577 [2024-11-18 18:14:07.776663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72453 ] 00:16:47.577 [2024-11-18 18:14:07.918341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.577 [2024-11-18 18:14:07.988301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.577 Running I/O for 90 seconds... 
00:16:47.577 [2024-11-18 18:14:17.935204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.935496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.935627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.935791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.935864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.935929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.935997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.936258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.577 [2024-11-18 18:14:17.936304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.577 [2024-11-18 18:14:17.936325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.577 [2024-11-18 18:14:17.936354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:47.578 [2024-11-18 18:14:17.936559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.936703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.936744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.936795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.936904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.936935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 
nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.937341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.937487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.578 [2024-11-18 18:14:17.937660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:16:47.578 [2024-11-18 18:14:17.937792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.578 [2024-11-18 18:14:17.937926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.578 [2024-11-18 18:14:17.937947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.937977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.937998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.938888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.938973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.938993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.579 [2024-11-18 18:14:17.939042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.579 [2024-11-18 18:14:17.939498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.579 [2024-11-18 18:14:17.939520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.579 [2024-11-18 18:14:17.939535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.939615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.939915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:16:47.580 [2024-11-18 18:14:17.941664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.941847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.941959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.941974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.942068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.942378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.942454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.580 [2024-11-18 18:14:17.942855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.580 [2024-11-18 18:14:17.942907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.580 [2024-11-18 18:14:17.942929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.942944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.942966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.942981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.581 [2024-11-18 18:14:17.943223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.943337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.943373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.943666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.943746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.943769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.943785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.944123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.581 [2024-11-18 18:14:17.944167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.581 [2024-11-18 18:14:17.944475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.581 [2024-11-18 18:14:17.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.944731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:16:47.582 [2024-11-18 18:14:17.944791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.944882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.944979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.944995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.945042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.945081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.945118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.945156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.945193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.945230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.945253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.955386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.955509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.955593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.955967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.955996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.956017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.956046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.582 [2024-11-18 18:14:17.956066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.956096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.582 [2024-11-18 18:14:17.956127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.956166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.956189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.956219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.582 [2024-11-18 18:14:17.956239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.582 [2024-11-18 18:14:17.956268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.956929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.956967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.956997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:16:47.583 [2024-11-18 18:14:17.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.957921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.957970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.957999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.958019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.958068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.958146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.958195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.958284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.583 [2024-11-18 18:14:17.958334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.583 [2024-11-18 18:14:17.958363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.583 [2024-11-18 18:14:17.958384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.958619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.958669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.958718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.958827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.958957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.958977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:47.584 [2024-11-18 18:14:17.959321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.959565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.959959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.959978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.960027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.960077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.584 [2024-11-18 18:14:17.960382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.584 [2024-11-18 18:14:17.960411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.584 [2024-11-18 18:14:17.960431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.960460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.960480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.962954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.962998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.963060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.963117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:16:47.585 [2024-11-18 18:14:17.963364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.963811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.963962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.963983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.964032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.964229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.964769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.585 [2024-11-18 18:14:17.964819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.585 [2024-11-18 18:14:17.964877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:47.585 [2024-11-18 18:14:17.964933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.585 [2024-11-18 18:14:17.964963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.964983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.965926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.965955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.965975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:16:47.586 [2024-11-18 18:14:17.966473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.586 [2024-11-18 18:14:17.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.586 [2024-11-18 18:14:17.966656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.586 [2024-11-18 18:14:17.966678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.966975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.587 [2024-11-18 18:14:17.967679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.967933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.967977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.968012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.968047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.968082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.968118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.587 [2024-11-18 18:14:17.968153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.968188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.587 [2024-11-18 18:14:17.968209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.587 [2024-11-18 18:14:17.968223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.968581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.968631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 
dnr:0 00:16:47.588 [2024-11-18 18:14:17.968839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.968855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.968877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.968901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.970968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.971117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.971158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.971196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.971755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.588 [2024-11-18 18:14:17.971908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.588 [2024-11-18 18:14:17.971960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.588 [2024-11-18 18:14:17.971974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.971995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.972089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.972437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.972570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.972626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.972973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.972987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.973021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.973135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.973270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.973304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:16:47.589 [2024-11-18 18:14:17.973357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.589 [2024-11-18 18:14:17.973371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.589 [2024-11-18 18:14:17.973424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.589 [2024-11-18 18:14:17.973445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.973955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.973975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.973996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.590 [2024-11-18 18:14:17.974535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.590 [2024-11-18 18:14:17.974745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.590 [2024-11-18 18:14:17.974767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.590 [2024-11-18 18:14:17.974781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.974802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.974817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.974838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.974883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.974903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.974917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.974937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.974951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.974992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:16:47.591 [2024-11-18 18:14:17.975691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.975850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.975981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.976014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.976028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.976048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.976062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.976082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:17.976097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:17.977204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.591 [2024-11-18 18:14:17.977233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:24.470906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:24.470967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:24.471041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.591 [2024-11-18 18:14:24.471063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.591 [2024-11-18 18:14:24.471085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.471769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 
[2024-11-18 18:14:24.471841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.471971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.471993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472556] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.592 [2024-11-18 18:14:24.472607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.592 [2024-11-18 18:14:24.472641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.592 [2024-11-18 18:14:24.472661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.472675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.472792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.472983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.472997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:16:47.593 [2024-11-18 18:14:24.473263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.593 [2024-11-18 18:14:24.473706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.593 [2024-11-18 18:14:24.473961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.593 [2024-11-18 18:14:24.473975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.473995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:47.594 [2024-11-18 18:14:24.474334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.474535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.474776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.474790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.475629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.475672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.475714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.475756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.475798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.475852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.475925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.475966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.475994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.476008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.476050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.476092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.476135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.476197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.476244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.594 [2024-11-18 18:14:24.476287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.476329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.594 [2024-11-18 18:14:24.476371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.594 [2024-11-18 18:14:24.476407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:24.476423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:24.476451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:24.476465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:24.476493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:24.476507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:24.476535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:24.476549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:24.476590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:24.476607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.632783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.632876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.632912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.632945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.632977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 
m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.632997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.633258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.633493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.633528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.595 [2024-11-18 18:14:31.633577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 
18:14:31.633682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:47.595 [2024-11-18 18:14:31.633788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.595 [2024-11-18 18:14:31.633802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.633823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.633838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.633858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.633872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.633892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.633906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.633927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.633941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.633992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.634488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.634973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.634987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.635022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.635057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.635117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.596 [2024-11-18 18:14:31.635153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.635187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.635222] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.635256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.596 [2024-11-18 18:14:31.635290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:47.596 [2024-11-18 18:14:31.635311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.635474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 
[2024-11-18 18:14:31.635594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.635662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.635731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.635765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.635966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.635980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.636119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.636222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.636257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.636299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.597 [2024-11-18 18:14:31.636404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:47.597 [2024-11-18 18:14:31.636645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:47.597 [2024-11-18 18:14:31.636679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.597 [2024-11-18 18:14:31.636694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.636721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.636736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.636756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.636771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.636791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.636806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.637714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.637764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.637808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.637851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.637894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.637937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.637965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.637979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:31.638512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:31.638637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.598 [2024-11-18 18:14:31.638652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.598 [2024-11-18 18:14:45.067453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.598 [2024-11-18 18:14:45.067518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.598 [2024-11-18 18:14:45.067559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:47.598 [2024-11-18 18:14:45.067584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1881b20 is same with the state(5) to be set 00:16:47.598 [2024-11-18 18:14:45.067700] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.067962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.067975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.068005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.068062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.068078] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.068108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.598 [2024-11-18 18:14:45.068124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.598 [2024-11-18 18:14:45.068138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3256 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 
[2024-11-18 18:14:45.068783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.068928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.068980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.068993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.069018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.069070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.069122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.599 [2024-11-18 18:14:45.069173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.599 [2024-11-18 18:14:45.069265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.599 [2024-11-18 18:14:45.069277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:47.600 [2024-11-18 18:14:45.069701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.069859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.069976] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.069988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070267] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.600 [2024-11-18 18:14:45.070407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.600 [2024-11-18 18:14:45.070423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.600 [2024-11-18 18:14:45.070437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2928 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.070808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 
[2024-11-18 18:14:45.070902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.070979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.070993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.071085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.071110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.071136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:47.601 [2024-11-18 18:14:45.071274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.601 [2024-11-18 18:14:45.071537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.601 [2024-11-18 18:14:45.071568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:47.602 [2024-11-18 18:14:45.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.602 [2024-11-18 18:14:45.071621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4c50 is same with the state(5) to be set 00:16:47.602 [2024-11-18 18:14:45.071638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:47.602 [2024-11-18 18:14:45.071648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:47.602 [2024-11-18 18:14:45.071658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3192 len:8 PRP1 0x0 PRP2 0x0 00:16:47.602 [2024-11-18 18:14:45.071671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:47.602 [2024-11-18 18:14:45.071717] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18a4c50 was disconnected and freed. reset controller. 
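The run of NOTICE lines above is the bdev_nvme layer draining qpair 0x18a4c50: every queued READ/WRITE is completed with ABORTED - SQ DELETION before the qpair is freed and the controller is reset. The bdevperf verify job keeps running across the reset, and the Fail/s column in the summary further down stays at 0.00, so these aborts do not surface as I/O errors. When reading a saved copy of such a dump by hand, a one-liner like the following condenses it; "multipath.log" is only a placeholder file name, not something the test suite produces:

  # count aborted submissions per opcode (illustrative only)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' multipath.log | sort | uniq -c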
00:16:47.602 [2024-11-18 18:14:45.072788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:16:47.602 [2024-11-18 18:14:45.072828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1881b20 (9): Bad file descriptor 
00:16:47.602 [2024-11-18 18:14:45.073144] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:47.602 [2024-11-18 18:14:45.073217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:47.602 [2024-11-18 18:14:45.073267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:47.602 [2024-11-18 18:14:45.073288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1881b20 with addr=10.0.0.2, port=4421 
00:16:47.602 [2024-11-18 18:14:45.073303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1881b20 is same with the state(5) to be set 
00:16:47.602 [2024-11-18 18:14:45.073336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1881b20 (9): Bad file descriptor 
00:16:47.602 [2024-11-18 18:14:45.073365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:16:47.602 [2024-11-18 18:14:45.073381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:16:47.602 [2024-11-18 18:14:45.073396] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:47.602 [2024-11-18 18:14:45.073448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:47.602 [2024-11-18 18:14:45.073468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:16:47.602 [2024-11-18 18:14:55.119675] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
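This is the failover path the multipath test exercises: the first reconnect lands while the listener on port 4421 is not accepting yet, so connect() fails with errno 111 (ECONNREFUSED), that reset attempt is marked failed, and bdev_nvme keeps retrying on its reconnect timer until the attempt stamped 18:14:55, roughly ten seconds later, succeeds. The retry cadence is set when the controller is attached; the attach flags used by this multipath run are not visible in this excerpt, but the timeout test below (host/timeout.sh@46) attaches with explicit reconnect limits, and a call of the same shape looks like this minimal sketch:

  # attach a controller with reconnect/ctrlr-loss limits
  # (mirrors the host/timeout.sh invocation later in this log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2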
00:16:47.602 Received shutdown signal, test time was about 55.543623 seconds 00:16:47.602 00:16:47.602 Latency(us) 00:16:47.602 [2024-11-18T18:15:06.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.602 [2024-11-18T18:15:06.206Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:47.602 Verification LBA range: start 0x0 length 0x4000 00:16:47.602 Nvme0n1 : 55.54 11162.91 43.61 0.00 0.00 11447.88 156.39 7046430.72 00:16:47.602 [2024-11-18T18:15:06.206Z] =================================================================================================================== 00:16:47.602 [2024-11-18T18:15:06.206Z] Total : 11162.91 43.61 0.00 0.00 11447.88 156.39 7046430.72 00:16:47.602 18:15:05 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.602 18:15:05 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:16:47.602 18:15:05 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:47.602 18:15:05 -- host/multipath.sh@125 -- # nvmftestfini 00:16:47.602 18:15:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:47.602 18:15:05 -- nvmf/common.sh@116 -- # sync 00:16:47.602 18:15:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:47.602 18:15:05 -- nvmf/common.sh@119 -- # set +e 00:16:47.602 18:15:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:47.602 18:15:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:47.602 rmmod nvme_tcp 00:16:47.602 rmmod nvme_fabrics 00:16:47.602 rmmod nvme_keyring 00:16:47.602 18:15:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:47.602 18:15:05 -- nvmf/common.sh@123 -- # set -e 00:16:47.602 18:15:05 -- nvmf/common.sh@124 -- # return 0 00:16:47.602 18:15:05 -- nvmf/common.sh@477 -- # '[' -n 72396 ']' 00:16:47.602 18:15:05 -- nvmf/common.sh@478 -- # killprocess 72396 00:16:47.602 18:15:05 -- common/autotest_common.sh@936 -- # '[' -z 72396 ']' 00:16:47.602 18:15:05 -- common/autotest_common.sh@940 -- # kill -0 72396 00:16:47.602 18:15:05 -- common/autotest_common.sh@941 -- # uname 00:16:47.602 18:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.602 18:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72396 00:16:47.602 18:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:47.602 18:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:47.602 18:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72396' 00:16:47.602 killing process with pid 72396 00:16:47.602 18:15:05 -- common/autotest_common.sh@955 -- # kill 72396 00:16:47.602 18:15:05 -- common/autotest_common.sh@960 -- # wait 72396 00:16:47.602 18:15:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:47.602 18:15:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:47.602 18:15:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:47.602 18:15:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.602 18:15:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:47.602 18:15:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.602 18:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.602 18:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.602 18:15:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:47.602 ************************************ 00:16:47.602 END TEST 
nvmf_multipath 00:16:47.602 ************************************ 00:16:47.602 00:16:47.602 real 1m1.388s 00:16:47.602 user 2m49.254s 00:16:47.602 sys 0m19.066s 00:16:47.602 18:15:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:47.602 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:16:47.862 18:15:06 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:47.862 18:15:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:47.862 18:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:47.862 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:16:47.862 ************************************ 00:16:47.862 START TEST nvmf_timeout 00:16:47.862 ************************************ 00:16:47.862 18:15:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:16:47.862 * Looking for test storage... 00:16:47.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.862 18:15:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:47.862 18:15:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:47.862 18:15:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:47.862 18:15:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:47.862 18:15:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:47.862 18:15:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:47.862 18:15:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:47.862 18:15:06 -- scripts/common.sh@335 -- # IFS=.-: 00:16:47.862 18:15:06 -- scripts/common.sh@335 -- # read -ra ver1 00:16:47.862 18:15:06 -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.862 18:15:06 -- scripts/common.sh@336 -- # read -ra ver2 00:16:47.862 18:15:06 -- scripts/common.sh@337 -- # local 'op=<' 00:16:47.862 18:15:06 -- scripts/common.sh@339 -- # ver1_l=2 00:16:47.862 18:15:06 -- scripts/common.sh@340 -- # ver2_l=1 00:16:47.862 18:15:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:47.862 18:15:06 -- scripts/common.sh@343 -- # case "$op" in 00:16:47.862 18:15:06 -- scripts/common.sh@344 -- # : 1 00:16:47.862 18:15:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:47.862 18:15:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.862 18:15:06 -- scripts/common.sh@364 -- # decimal 1 00:16:47.862 18:15:06 -- scripts/common.sh@352 -- # local d=1 00:16:47.862 18:15:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.862 18:15:06 -- scripts/common.sh@354 -- # echo 1 00:16:47.862 18:15:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:47.862 18:15:06 -- scripts/common.sh@365 -- # decimal 2 00:16:47.863 18:15:06 -- scripts/common.sh@352 -- # local d=2 00:16:47.863 18:15:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.863 18:15:06 -- scripts/common.sh@354 -- # echo 2 00:16:47.863 18:15:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:47.863 18:15:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:47.863 18:15:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:47.863 18:15:06 -- scripts/common.sh@367 -- # return 0 00:16:47.863 18:15:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.863 18:15:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.863 --rc genhtml_branch_coverage=1 00:16:47.863 --rc genhtml_function_coverage=1 00:16:47.863 --rc genhtml_legend=1 00:16:47.863 --rc geninfo_all_blocks=1 00:16:47.863 --rc geninfo_unexecuted_blocks=1 00:16:47.863 00:16:47.863 ' 00:16:47.863 18:15:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.863 --rc genhtml_branch_coverage=1 00:16:47.863 --rc genhtml_function_coverage=1 00:16:47.863 --rc genhtml_legend=1 00:16:47.863 --rc geninfo_all_blocks=1 00:16:47.863 --rc geninfo_unexecuted_blocks=1 00:16:47.863 00:16:47.863 ' 00:16:47.863 18:15:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.863 --rc genhtml_branch_coverage=1 00:16:47.863 --rc genhtml_function_coverage=1 00:16:47.863 --rc genhtml_legend=1 00:16:47.863 --rc geninfo_all_blocks=1 00:16:47.863 --rc geninfo_unexecuted_blocks=1 00:16:47.863 00:16:47.863 ' 00:16:47.863 18:15:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.863 --rc genhtml_branch_coverage=1 00:16:47.863 --rc genhtml_function_coverage=1 00:16:47.863 --rc genhtml_legend=1 00:16:47.863 --rc geninfo_all_blocks=1 00:16:47.863 --rc geninfo_unexecuted_blocks=1 00:16:47.863 00:16:47.863 ' 00:16:47.863 18:15:06 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.863 18:15:06 -- nvmf/common.sh@7 -- # uname -s 00:16:47.863 18:15:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.863 18:15:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.863 18:15:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.863 18:15:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.863 18:15:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.863 18:15:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.863 18:15:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.863 18:15:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.863 18:15:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.863 18:15:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:16:47.863 
18:15:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:16:47.863 18:15:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.863 18:15:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.863 18:15:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.863 18:15:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.863 18:15:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.863 18:15:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.863 18:15:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.863 18:15:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.863 18:15:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.863 18:15:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.863 18:15:06 -- paths/export.sh@5 -- # export PATH 00:16:47.863 18:15:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.863 18:15:06 -- nvmf/common.sh@46 -- # : 0 00:16:47.863 18:15:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:47.863 18:15:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:47.863 18:15:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:47.863 18:15:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.863 18:15:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.863 18:15:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
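The trace above is nvmf/common.sh establishing the identity and addressing defaults for the run: the NVMe/TCP service ports (4420-4422) and a per-run host NQN/ID generated with 'nvme gen-hostnqn', which are packed into the NVME_HOST array for later use. In this job the I/O path is bdevperf rather than the kernel initiator, but for reference the same identity would be consumed by a kernel-side connect along these lines (illustrative only; not executed in this log):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"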
00:16:47.863 18:15:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:47.863 18:15:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:47.863 18:15:06 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.863 18:15:06 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.863 18:15:06 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.863 18:15:06 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:47.863 18:15:06 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.863 18:15:06 -- host/timeout.sh@19 -- # nvmftestinit 00:16:47.863 18:15:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:47.863 18:15:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.863 18:15:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:47.863 18:15:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:47.863 18:15:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:47.863 18:15:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.863 18:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.863 18:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.863 18:15:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:47.863 18:15:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:47.863 18:15:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.863 18:15:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.863 18:15:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:47.863 18:15:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:47.863 18:15:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.863 18:15:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.863 18:15:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.863 18:15:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.863 18:15:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.863 18:15:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.863 18:15:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.863 18:15:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.863 18:15:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:47.863 18:15:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:47.863 Cannot find device "nvmf_tgt_br" 00:16:47.863 18:15:06 -- nvmf/common.sh@154 -- # true 00:16:47.863 18:15:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.863 Cannot find device "nvmf_tgt_br2" 00:16:47.863 18:15:06 -- nvmf/common.sh@155 -- # true 00:16:47.863 18:15:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:47.863 18:15:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:47.863 Cannot find device "nvmf_tgt_br" 00:16:47.863 18:15:06 -- nvmf/common.sh@157 -- # true 00:16:47.863 18:15:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:47.863 Cannot find device "nvmf_tgt_br2" 00:16:47.863 18:15:06 -- nvmf/common.sh@158 -- # true 00:16:47.863 18:15:06 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:48.122 18:15:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:48.122 18:15:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.122 18:15:06 -- nvmf/common.sh@161 -- # true 00:16:48.122 18:15:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.122 18:15:06 -- nvmf/common.sh@162 -- # true 00:16:48.122 18:15:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.122 18:15:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.122 18:15:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.122 18:15:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.122 18:15:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.122 18:15:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.122 18:15:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.122 18:15:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.122 18:15:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.122 18:15:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:48.122 18:15:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:48.122 18:15:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:48.122 18:15:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:48.122 18:15:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.122 18:15:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.122 18:15:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.122 18:15:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:48.122 18:15:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:48.122 18:15:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.122 18:15:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.122 18:15:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.122 18:15:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.122 18:15:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.122 18:15:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:48.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:16:48.122 00:16:48.122 --- 10.0.0.2 ping statistics --- 00:16:48.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.122 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:48.122 18:15:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:48.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:48.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:48.122 00:16:48.122 --- 10.0.0.3 ping statistics --- 00:16:48.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.122 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:48.122 18:15:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:48.122 00:16:48.122 --- 10.0.0.1 ping statistics --- 00:16:48.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.122 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:48.122 18:15:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.122 18:15:06 -- nvmf/common.sh@421 -- # return 0 00:16:48.122 18:15:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.122 18:15:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.122 18:15:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.122 18:15:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.122 18:15:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.122 18:15:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.122 18:15:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.122 18:15:06 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:16:48.122 18:15:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.122 18:15:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.122 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:16:48.122 18:15:06 -- nvmf/common.sh@469 -- # nvmfpid=73579 00:16:48.122 18:15:06 -- nvmf/common.sh@470 -- # waitforlisten 73579 00:16:48.123 18:15:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:48.123 18:15:06 -- common/autotest_common.sh@829 -- # '[' -z 73579 ']' 00:16:48.123 18:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.123 18:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.123 18:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.123 18:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.123 18:15:06 -- common/autotest_common.sh@10 -- # set +x 00:16:48.381 [2024-11-18 18:15:06.767821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.381 [2024-11-18 18:15:06.767931] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.381 [2024-11-18 18:15:06.900822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.381 [2024-11-18 18:15:06.953957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.381 [2024-11-18 18:15:06.954120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.381 [2024-11-18 18:15:06.954132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
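nvmftestinit has just rebuilt the virtual test network: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side end nvmf_init_if at 10.0.0.1, all tied together through the nvmf_br bridge and verified by the three pings, after which nvmf_tgt is launched inside the namespace. A condensed sketch of the same setup, taken from the trace above (link bring-up and the second target interface are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3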
00:16:48.381 [2024-11-18 18:15:06.954141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.381 [2024-11-18 18:15:06.954306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.381 [2024-11-18 18:15:06.954319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.316 18:15:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.316 18:15:07 -- common/autotest_common.sh@862 -- # return 0 00:16:49.316 18:15:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.316 18:15:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.316 18:15:07 -- common/autotest_common.sh@10 -- # set +x 00:16:49.316 18:15:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.316 18:15:07 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.316 18:15:07 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.575 [2024-11-18 18:15:08.074020] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.575 18:15:08 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:49.833 Malloc0 00:16:49.833 18:15:08 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:50.091 18:15:08 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.349 18:15:08 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.608 [2024-11-18 18:15:09.086221] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.608 18:15:09 -- host/timeout.sh@32 -- # bdevperf_pid=73638 00:16:50.608 18:15:09 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:16:50.608 18:15:09 -- host/timeout.sh@34 -- # waitforlisten 73638 /var/tmp/bdevperf.sock 00:16:50.608 18:15:09 -- common/autotest_common.sh@829 -- # '[' -z 73638 ']' 00:16:50.608 18:15:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.608 18:15:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.608 18:15:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.608 18:15:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.608 18:15:09 -- common/autotest_common.sh@10 -- # set +x 00:16:50.608 [2024-11-18 18:15:09.150104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:50.608 [2024-11-18 18:15:09.150176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73638 ] 00:16:50.867 [2024-11-18 18:15:09.287167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.867 [2024-11-18 18:15:09.356432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.802 18:15:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.802 18:15:10 -- common/autotest_common.sh@862 -- # return 0 00:16:51.802 18:15:10 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:51.802 18:15:10 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:16:52.062 NVMe0n1 00:16:52.062 18:15:10 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:52.062 18:15:10 -- host/timeout.sh@51 -- # rpc_pid=73657 00:16:52.062 18:15:10 -- host/timeout.sh@53 -- # sleep 1 00:16:52.320 Running I/O for 10 seconds... 00:16:53.254 18:15:11 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.516 [2024-11-18 18:15:11.886689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886855] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886863] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 18:15:11.886887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.516 [2024-11-18 
18:15:11.886909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886917] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.886993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to 
be set 00:16:53.517 [2024-11-18 18:15:11.887121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d1480 is same with the state(5) to be set 00:16:53.517 [2024-11-18 18:15:11.887184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 
18:15:11.887419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.517 [2024-11-18 18:15:11.887796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.517 [2024-11-18 18:15:11.887830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.517 [2024-11-18 18:15:11.887839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.887860] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.887882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.887904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.887925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.887946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.887966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.887987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.887999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 
[2024-11-18 18:15:11.888509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.518 [2024-11-18 18:15:11.888638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.518 [2024-11-18 18:15:11.888670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.518 [2024-11-18 18:15:11.888680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888733] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.888957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.888989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.888999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889157] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.519 [2024-11-18 18:15:11.889273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.519 [2024-11-18 18:15:11.889464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.519 [2024-11-18 18:15:11.889475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128832 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:53.520 [2024-11-18 18:15:11.889815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:53.520 [2024-11-18 18:15:11.889857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.520 [2024-11-18 18:15:11.889942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.889953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b0c0 is same with the state(5) to be set 00:16:53.520 [2024-11-18 18:15:11.889966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:53.520 [2024-11-18 18:15:11.889975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:53.520 [2024-11-18 18:15:11.889983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128344 len:8 PRP1 0x0 PRP2 0x0 00:16:53.520 [2024-11-18 18:15:11.889992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.520 [2024-11-18 18:15:11.890036] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x78b0c0 was disconnected and freed. reset controller. 
00:16:53.520 [2024-11-18 18:15:11.890284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.520 [2024-11-18 18:15:11.890385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x728010 (9): Bad file descriptor 00:16:53.520 [2024-11-18 18:15:11.890488] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.520 [2024-11-18 18:15:11.890570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.520 [2024-11-18 18:15:11.890619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:53.520 [2024-11-18 18:15:11.890642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x728010 with addr=10.0.0.2, port=4420 00:16:53.520 [2024-11-18 18:15:11.890655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x728010 is same with the state(5) to be set 00:16:53.520 [2024-11-18 18:15:11.890676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x728010 (9): Bad file descriptor 00:16:53.520 [2024-11-18 18:15:11.890694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.520 [2024-11-18 18:15:11.890704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:53.520 [2024-11-18 18:15:11.890715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.520 [2024-11-18 18:15:11.890736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:53.520 [2024-11-18 18:15:11.890748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.520 18:15:11 -- host/timeout.sh@56 -- # sleep 2 00:16:55.422 [2024-11-18 18:15:13.890854] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:55.422 [2024-11-18 18:15:13.890953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:55.422 [2024-11-18 18:15:13.890996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:55.422 [2024-11-18 18:15:13.891029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x728010 with addr=10.0.0.2, port=4420 00:16:55.422 [2024-11-18 18:15:13.891059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x728010 is same with the state(5) to be set 00:16:55.422 [2024-11-18 18:15:13.891085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x728010 (9): Bad file descriptor 00:16:55.422 [2024-11-18 18:15:13.891106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:55.422 [2024-11-18 18:15:13.891116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:55.422 [2024-11-18 18:15:13.891127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:55.422 [2024-11-18 18:15:13.891153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:55.422 [2024-11-18 18:15:13.891167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:55.422 18:15:13 -- host/timeout.sh@57 -- # get_controller 00:16:55.422 18:15:13 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:55.422 18:15:13 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:16:55.680 18:15:14 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:16:55.680 18:15:14 -- host/timeout.sh@58 -- # get_bdev 00:16:55.680 18:15:14 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:16:55.680 18:15:14 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:16:55.939 18:15:14 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:16:55.939 18:15:14 -- host/timeout.sh@61 -- # sleep 5 00:16:57.420 [2024-11-18 18:15:15.891332] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:57.420 [2024-11-18 18:15:15.891445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:57.420 [2024-11-18 18:15:15.891493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:57.420 [2024-11-18 18:15:15.891512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x728010 with addr=10.0.0.2, port=4420 00:16:57.420 [2024-11-18 18:15:15.891545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x728010 is same with the state(5) to be set 00:16:57.420 [2024-11-18 18:15:15.891574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x728010 (9): Bad file descriptor 00:16:57.420 [2024-11-18 18:15:15.891597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:57.420 [2024-11-18 18:15:15.891608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:57.420 [2024-11-18 18:15:15.891620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:57.420 [2024-11-18 18:15:15.891648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:57.420 [2024-11-18 18:15:15.891661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.324 [2024-11-18 18:15:17.891693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.324 [2024-11-18 18:15:17.891779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.324 [2024-11-18 18:15:17.891807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:16:59.324 [2024-11-18 18:15:17.891817] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:16:59.324 [2024-11-18 18:15:17.891846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:00.701 00:17:00.701 Latency(us) 00:17:00.701 [2024-11-18T18:15:19.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.701 [2024-11-18T18:15:19.305Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:00.701 Verification LBA range: start 0x0 length 0x4000 00:17:00.701 NVMe0n1 : 8.17 1958.24 7.65 15.68 0.00 64760.14 2889.54 7015926.69 00:17:00.701 [2024-11-18T18:15:19.305Z] =================================================================================================================== 00:17:00.701 [2024-11-18T18:15:19.305Z] Total : 1958.24 7.65 15.68 0.00 64760.14 2889.54 7015926.69 00:17:00.701 0 00:17:00.959 18:15:19 -- host/timeout.sh@62 -- # get_controller 00:17:00.959 18:15:19 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:00.959 18:15:19 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:01.217 18:15:19 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:01.217 18:15:19 -- host/timeout.sh@63 -- # get_bdev 00:17:01.217 18:15:19 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:01.217 18:15:19 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:01.475 18:15:20 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:01.475 18:15:20 -- host/timeout.sh@65 -- # wait 73657 00:17:01.475 18:15:20 -- host/timeout.sh@67 -- # killprocess 73638 00:17:01.475 18:15:20 -- common/autotest_common.sh@936 -- # '[' -z 73638 ']' 00:17:01.475 18:15:20 -- common/autotest_common.sh@940 -- # kill -0 73638 00:17:01.475 18:15:20 -- common/autotest_common.sh@941 -- # uname 00:17:01.475 18:15:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.475 18:15:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73638 00:17:01.475 18:15:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:01.475 18:15:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:01.475 killing process with pid 73638 00:17:01.475 18:15:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73638' 00:17:01.475 18:15:20 -- common/autotest_common.sh@955 -- # kill 73638 00:17:01.475 Received shutdown signal, test time was about 9.341392 seconds 00:17:01.475 00:17:01.475 Latency(us) 00:17:01.475 [2024-11-18T18:15:20.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.475 [2024-11-18T18:15:20.079Z] =================================================================================================================== 00:17:01.475 [2024-11-18T18:15:20.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.475 18:15:20 -- common/autotest_common.sh@960 -- # wait 73638 00:17:01.734 18:15:20 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.993 [2024-11-18 18:15:20.487669] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.993 18:15:20 -- host/timeout.sh@74 -- # bdevperf_pid=73781 00:17:01.993 18:15:20 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:01.993 18:15:20 -- host/timeout.sh@76 -- # waitforlisten 73781 /var/tmp/bdevperf.sock 00:17:01.993 18:15:20 -- common/autotest_common.sh@829 -- # '[' -z 73781 ']' 00:17:01.993 18:15:20 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:17:01.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.993 18:15:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:01.993 18:15:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.993 18:15:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:01.993 18:15:20 -- common/autotest_common.sh@10 -- # set +x 00:17:01.993 [2024-11-18 18:15:20.551527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:01.993 [2024-11-18 18:15:20.551849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73781 ] 00:17:02.252 [2024-11-18 18:15:20.685771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.252 [2024-11-18 18:15:20.744230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.187 18:15:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.187 18:15:21 -- common/autotest_common.sh@862 -- # return 0 00:17:03.187 18:15:21 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:03.187 18:15:21 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:03.445 NVMe0n1 00:17:03.445 18:15:22 -- host/timeout.sh@84 -- # rpc_pid=73808 00:17:03.445 18:15:22 -- host/timeout.sh@86 -- # sleep 1 00:17:03.445 18:15:22 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:03.703 Running I/O for 10 seconds... 
00:17:04.638 18:15:23 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.898 [2024-11-18 18:15:23.281882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.281946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.281974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.281982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.281989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.281997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282021] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8317b0 is same with the state(5) to be set 00:17:04.899 [2024-11-18 18:15:23.282116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.899 [2024-11-18 18:15:23.282145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.899 [2024-11-18 18:15:23.282165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.899 [2024-11-18 18:15:23.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.899 [2024-11-18 18:15:23.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.899 [2024-11-18 18:15:23.282195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.899 [2024-11-18 18:15:23.282206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.899 [2024-11-18 18:15:23.282231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.899 
[2024-11-18 18:15:23.282242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.899 [2024-11-18 18:15:23.282251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 18:15:23.282262-284859 (elapsed 00:17:04.899-00:17:04.902): the same pair of messages repeats for each remaining queued READ/WRITE on qid:1 (lba 123560 through 124864, len:8 each) — nvme_io_qpair_print_command prints the command and spdk_nvme_print_completion reports ABORTED - SQ DELETION (00/08) for it ...]
00:17:04.902 [2024-11-18 18:15:23.284870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.902 [2024-11-18 18:15:23.284879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.902 [2024-11-18 18:15:23.284889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:04.902 [2024-11-18 18:15:23.284898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.902 [2024-11-18 18:15:23.284909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dd0c0 is same with the state(5) to be set
00:17:04.902 [2024-11-18 18:15:23.284921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:04.902 [2024-11-18 18:15:23.284928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:04.902 [2024-11-18 18:15:23.284937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124264 len:8 PRP1 0x0 PRP2 0x0
00:17:04.902 [2024-11-18 18:15:23.284945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.902 [2024-11-18 18:15:23.284988] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8dd0c0 was disconnected and freed. reset controller.
00:17:04.902 [2024-11-18 18:15:23.285079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.902 [2024-11-18 18:15:23.285111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 18:15:23.285122-285170: the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin commands cid:1, cid:2 and cid:3 on qid:0 ...]
00:17:04.903 [2024-11-18 18:15:23.285178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set
00:17:04.903 [2024-11-18 18:15:23.285384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:04.903 [2024-11-18 18:15:23.285404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor
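The (00/08) status on every completion above decodes to Status Code Type 0x0 (generic command status), Status Code 0x08, "Command Aborted due to SQ Deletion": bdev_nvme aborts everything still queued on the qpair before freeing it for the controller reset. A quick way to tally those aborts from a saved copy of this console output — the build.log file name is only an assumption, not something the test produces:

    # Count aborted completions per queue while the reset was in flight.
    # 'build.log' is a hypothetical local copy of this console output.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
    # Rough LBA range of the aborted reads/writes (across the whole log).
    grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'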
00:17:04.903 [2024-11-18 18:15:23.285495] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:04.903 [2024-11-18 18:15:23.285557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:04.903 [2024-11-18 18:15:23.285617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:04.903 [2024-11-18 18:15:23.285633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a010 with addr=10.0.0.2, port=4420
00:17:04.903 [2024-11-18 18:15:23.285644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set
00:17:04.903 [2024-11-18 18:15:23.285662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor
00:17:04.903 [2024-11-18 18:15:23.285681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:04.903 [2024-11-18 18:15:23.285691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:04.903 [2024-11-18 18:15:23.285700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:04.903 [2024-11-18 18:15:23.285720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:04.903 [2024-11-18 18:15:23.285731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:04.903 18:15:23 -- host/timeout.sh@90 -- # sleep 1
[... 18:15:24.285842-286152 (elapsed 00:17:05.838): one second later the same reconnect attempt repeats and fails the same way — uring/posix connect() to 10.0.0.2 port 4420 returns errno = 111, the controller stays in error state, reinitialization fails, and "Resetting controller failed." is reported again ...]
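The connect() failures above (errno = 111, ECONNREFUSED) are expected here: the test has removed the target's TCP listener, so every reconnect attempt made by the host's reset path is refused until host/timeout.sh re-adds the listener just below. A rough sketch of that toggle, using the same rpc.py calls and addresses that appear in this log (the surrounding loop is an assumption, not a copy of host/timeout.sh):

    # Assumed sketch: drop the listener, let bdev_nvme retry against a refused
    # connection for a while, then restore the listener so the reset can finish.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # reconnect attempts fail with errno 111 (ECONNREFUSED) meanwhile
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420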
00:17:05.838 [2024-11-18 18:15:24.286179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
18:15:24 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:06.097 [2024-11-18 18:15:24.571497] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:06.097 18:15:24 -- host/timeout.sh@92 -- # wait 73808
00:17:07.035 [2024-11-18 18:15:25.306124] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:13.657
00:17:13.657 Latency(us)
00:17:13.657 [2024-11-18T18:15:32.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:13.657 [2024-11-18T18:15:32.261Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:13.657 Verification LBA range: start 0x0 length 0x4000
00:17:13.657 NVMe0n1 : 10.01 9742.57 38.06 0.00 0.00 13117.37 852.71 3019898.88
00:17:13.657 [2024-11-18T18:15:32.261Z] ===================================================================================================================
00:17:13.657 [2024-11-18T18:15:32.261Z] Total : 9742.57 38.06 0.00 0.00 13117.37 852.71 3019898.88
00:17:13.657 0
00:17:13.657 18:15:32 -- host/timeout.sh@97 -- # rpc_pid=73909
00:17:13.657 18:15:32 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:13.657 18:15:32 -- host/timeout.sh@98 -- # sleep 1
00:17:13.916 Running I/O for 10 seconds...
00:17:14.854 18:15:33 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
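The summary table above comes from a bdevperf process that the test keeps running in the background and drives over its RPC socket; the perform_tests trace right after it kicks off the next 10-second run. A sketch of that pattern, under the assumptions that SPDK_DIR points at a built SPDK tree and that the bdevperf flags (reconstructed from the "Core Mask 0x4, workload: verify, depth: 128, IO size: 4096" banner and from memory) are spelled correctly — only the bdevperf.py call is copied from this log:

    # Assumed sketch: start bdevperf in wait mode (-z), then trigger a run via RPC.
    # Flag spellings are from memory; check bdevperf --help before relying on them.
    "$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # Copied from the trace above: trigger the run and print a summary table.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests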
00:17:14.855 [2024-11-18 18:15:33.444801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8304a0 is same with the state(5) to be set
[... 18:15:33.444856-445147: the same nvmf_tcp_qpair_set_recv_state error for tqpair=0x8304a0 repeats several dozen times as the target-side qpair is torn down after the listener is removed ...]
00:17:14.855 [2024-11-18 18:15:33.445213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:14.855 [2024-11-18 18:15:33.445245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 18:15:33.445274-446342: as in the earlier reset, each remaining queued READ/WRITE on qid:1 (LBAs between 127392 and 128344, len:8 each) is printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:17:14.856 [2024-11-18 18:15:33.446354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:14.856 [2024-11-18 18:15:33.446363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.856 [2024-11-18 18:15:33.446375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.856 [2024-11-18 18:15:33.446385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:14.856 [2024-11-18 18:15:33.446396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.116 [2024-11-18 18:15:33.446448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.116 [2024-11-18 18:15:33.446576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.116 [2024-11-18 18:15:33.446599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.116 [2024-11-18 18:15:33.446621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.116 [2024-11-18 18:15:33.446642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.116 [2024-11-18 18:15:33.446653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.446698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.446720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.446802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.446822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.446843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.446967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.446979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 
18:15:33.447252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.117 [2024-11-18 18:15:33.447300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.117 [2024-11-18 18:15:33.447451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.117 [2024-11-18 18:15:33.447460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447680] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:15.118 [2024-11-18 18:15:33.447935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.447980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.447991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.448000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.448020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.118 [2024-11-18 18:15:33.448055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f0d50 is same with the state(5) to be set 00:17:15.118 [2024-11-18 18:15:33.448077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:15.118 [2024-11-18 18:15:33.448085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:15.118 [2024-11-18 18:15:33.448093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128176 len:8 PRP1 0x0 PRP2 0x0 00:17:15.118 [2024-11-18 18:15:33.448102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448144] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x8f0d50 was disconnected and freed. reset controller. 00:17:15.118 [2024-11-18 18:15:33.448228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.118 [2024-11-18 18:15:33.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.118 [2024-11-18 18:15:33.448264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.118 [2024-11-18 18:15:33.448282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.118 [2024-11-18 18:15:33.448300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:15.118 [2024-11-18 18:15:33.448309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set 00:17:15.118 [2024-11-18 18:15:33.448537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:15.118 [2024-11-18 18:15:33.448558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor 00:17:15.118 [2024-11-18 18:15:33.448685] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.118 [2024-11-18 18:15:33.448741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.118 [2024-11-18 18:15:33.448784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.118 [2024-11-18 18:15:33.448800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a010 with addr=10.0.0.2, port=4420 00:17:15.118 [2024-11-18 18:15:33.448810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set 00:17:15.118 [2024-11-18 18:15:33.448829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor 00:17:15.118 [2024-11-18 18:15:33.448851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:15.118 [2024-11-18 18:15:33.448860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:15.118 [2024-11-18 18:15:33.448871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:15.118 [2024-11-18 18:15:33.448892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:15.118 [2024-11-18 18:15:33.448903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:15.119 18:15:33 -- host/timeout.sh@101 -- # sleep 3 00:17:16.053 [2024-11-18 18:15:34.449005] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.053 [2024-11-18 18:15:34.449099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.053 [2024-11-18 18:15:34.449139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.053 [2024-11-18 18:15:34.449155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a010 with addr=10.0.0.2, port=4420 00:17:16.053 [2024-11-18 18:15:34.449168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set 00:17:16.053 [2024-11-18 18:15:34.449191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor 00:17:16.053 [2024-11-18 18:15:34.449208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:16.053 [2024-11-18 18:15:34.449216] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:16.053 [2024-11-18 18:15:34.449226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:16.053 [2024-11-18 18:15:34.449251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:16.053 [2024-11-18 18:15:34.449261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:16.987 [2024-11-18 18:15:35.449389] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.987 [2024-11-18 18:15:35.449486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.987 [2024-11-18 18:15:35.449530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.987 [2024-11-18 18:15:35.449573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a010 with addr=10.0.0.2, port=4420 00:17:16.987 [2024-11-18 18:15:35.449588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set 00:17:16.987 [2024-11-18 18:15:35.449613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor 00:17:16.987 [2024-11-18 18:15:35.449631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:16.987 [2024-11-18 18:15:35.449641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:16.987 [2024-11-18 18:15:35.449651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:16.987 [2024-11-18 18:15:35.449677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:16.987 [2024-11-18 18:15:35.449688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.923 [2024-11-18 18:15:36.451505] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.923 [2024-11-18 18:15:36.451627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.923 [2024-11-18 18:15:36.451669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.923 [2024-11-18 18:15:36.451685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87a010 with addr=10.0.0.2, port=4420 00:17:17.923 [2024-11-18 18:15:36.451696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a010 is same with the state(5) to be set 00:17:17.923 [2024-11-18 18:15:36.451865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87a010 (9): Bad file descriptor 00:17:17.923 [2024-11-18 18:15:36.452008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.923 [2024-11-18 18:15:36.452020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:17.923 [2024-11-18 18:15:36.452030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.923 [2024-11-18 18:15:36.454356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.923 [2024-11-18 18:15:36.454384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.923 18:15:36 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.182 [2024-11-18 18:15:36.729251] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.182 18:15:36 -- host/timeout.sh@103 -- # wait 73909 00:17:19.118 [2024-11-18 18:15:37.472468] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:24.384
00:17:24.384 Latency(us)
00:17:24.384 [2024-11-18T18:15:42.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:24.384 [2024-11-18T18:15:42.988Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:24.384 Verification LBA range: start 0x0 length 0x4000
00:17:24.384 NVMe0n1 : 10.01 8392.08 32.78 6041.59 0.00 8853.16 495.24 3019898.88
00:17:24.384 [2024-11-18T18:15:42.988Z] ===================================================================================================================
00:17:24.384 [2024-11-18T18:15:42.988Z] Total : 8392.08 32.78 6041.59 0.00 8853.16 0.00 3019898.88
00:17:24.384 0
00:17:24.384 18:15:42 -- host/timeout.sh@105 -- # killprocess 73781
00:17:24.384 18:15:42 -- common/autotest_common.sh@936 -- # '[' -z 73781 ']'
00:17:24.384 18:15:42 -- common/autotest_common.sh@940 -- # kill -0 73781
00:17:24.384 18:15:42 -- common/autotest_common.sh@941 -- # uname
00:17:24.384 18:15:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:24.384 18:15:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73781
00:17:24.384 killing process with pid 73781
Received shutdown signal, test time was about 10.000000 seconds
00:17:24.384
00:17:24.384 Latency(us)
00:17:24.384 [2024-11-18T18:15:42.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:24.384 [2024-11-18T18:15:42.988Z] ===================================================================================================================
00:17:24.384 [2024-11-18T18:15:42.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:24.384 18:15:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:17:24.384 18:15:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:17:24.384 18:15:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73781'
00:17:24.384 18:15:42 -- common/autotest_common.sh@955 -- # kill 73781
00:17:24.384 18:15:42 -- common/autotest_common.sh@960 -- # wait 73781
00:17:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:24.385 18:15:42 -- host/timeout.sh@110 -- # bdevperf_pid=74029
00:17:24.385 18:15:42 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:17:24.385 18:15:42 -- host/timeout.sh@112 -- # waitforlisten 74029 /var/tmp/bdevperf.sock
00:17:24.385 18:15:42 -- common/autotest_common.sh@829 -- # '[' -z 74029 ']'
00:17:24.385 18:15:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:24.385 18:15:42 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:24.385 18:15:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:24.385 18:15:42 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:24.385 18:15:42 -- common/autotest_common.sh@10 -- # set +x
00:17:24.385 [2024-11-18 18:15:42.615023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:24.385 [2024-11-18 18:15:42.615349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:17:24.385 [2024-11-18 18:15:42.755301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.385 [2024-11-18 18:15:42.809471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.320 18:15:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.320 18:15:43 -- common/autotest_common.sh@862 -- # return 0 00:17:25.320 18:15:43 -- host/timeout.sh@116 -- # dtrace_pid=74045 00:17:25.320 18:15:43 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 74029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:17:25.320 18:15:43 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:17:25.320 18:15:43 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:25.578 NVMe0n1 00:17:25.578 18:15:44 -- host/timeout.sh@124 -- # rpc_pid=74081 00:17:25.578 18:15:44 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.578 18:15:44 -- host/timeout.sh@125 -- # sleep 1 00:17:25.837 Running I/O for 10 seconds... 00:17:26.776 18:15:45 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.776 [2024-11-18 18:15:45.366250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.776 [2024-11-18 18:15:45.366343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.776 [2024-11-18 18:15:45.366367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.776 [2024-11-18 18:15:45.366386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.776 [2024-11-18 18:15:45.366406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852010 is same with the state(5) to be set 00:17:26.776 [2024-11-18 18:15:45.366678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:26.776 [2024-11-18 18:15:45.366711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.366986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.366996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.367007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.367015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.776 [2024-11-18 18:15:45.367026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.776 [2024-11-18 18:15:45.367034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367110] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:26.777 [2024-11-18 18:15:45.367530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 
18:15:45.367763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.777 [2024-11-18 18:15:45.367850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.777 [2024-11-18 18:15:45.367861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.367982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.367993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98104 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:26.778 [2024-11-18 18:15:45.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.778 [2024-11-18 18:15:45.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.778 [2024-11-18 18:15:45.368628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 
18:15:45.368752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.368988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.368998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.779 [2024-11-18 18:15:45.369241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b50c0 is same with the state(5) to be set 00:17:26.779 [2024-11-18 18:15:45.369262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:26.779 [2024-11-18 18:15:45.369270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:26.779 [2024-11-18 18:15:45.369278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82464 len:8 PRP1 0x0 PRP2 0x0 00:17:26.779 [2024-11-18 18:15:45.369287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.779 [2024-11-18 18:15:45.369329] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b50c0 was disconnected and freed. reset controller. 
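(Annotation, not part of the captured output: once the TCP qpair above is torn down, every command still queued on qid:1 is completed manually with ABORTED - SQ DELETION, which is why the same completion record repeats for cid 11 through 126 before the qpair is freed and the controller reset begins. A minimal way to tally those aborted completions from a saved copy of this console output is the grep below; the file name console.log is only an assumed placeholder for wherever the capture is stored.)
    grep -c 'ABORTED - SQ DELETION' console.log    # count aborted completions in the capture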
00:17:26.779 [2024-11-18 18:15:45.369600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:26.779 [2024-11-18 18:15:45.369650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852010 (9): Bad file descriptor 00:17:26.779 [2024-11-18 18:15:45.369752] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:26.779 [2024-11-18 18:15:45.369831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:26.779 [2024-11-18 18:15:45.369877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:26.779 [2024-11-18 18:15:45.369893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x852010 with addr=10.0.0.2, port=4420 00:17:26.779 [2024-11-18 18:15:45.369904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852010 is same with the state(5) to be set 00:17:26.779 [2024-11-18 18:15:45.369924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852010 (9): Bad file descriptor 00:17:26.779 [2024-11-18 18:15:45.369946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:26.779 [2024-11-18 18:15:45.369956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:26.779 [2024-11-18 18:15:45.369967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:26.779 [2024-11-18 18:15:45.369987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:26.779 [2024-11-18 18:15:45.369997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:27.072 18:15:45 -- host/timeout.sh@128 -- # wait 74081 00:17:28.981 [2024-11-18 18:15:47.370171] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.981 [2024-11-18 18:15:47.370344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.981 [2024-11-18 18:15:47.370391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.981 [2024-11-18 18:15:47.370408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x852010 with addr=10.0.0.2, port=4420 00:17:28.981 [2024-11-18 18:15:47.370421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852010 is same with the state(5) to be set 00:17:28.981 [2024-11-18 18:15:47.370450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852010 (9): Bad file descriptor 00:17:28.981 [2024-11-18 18:15:47.370469] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:28.981 [2024-11-18 18:15:47.370479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:28.981 [2024-11-18 18:15:47.370489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:28.981 [2024-11-18 18:15:47.370518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
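(Annotation: each reconnect attempt above fails with connect() errno 111, i.e. ECONNREFUSED, and the bdev layer retries roughly every two seconds; the timeout test later verifies this by counting 'reconnect delay' records in trace.txt, as seen in the host/timeout.sh trace further down. A minimal sketch of that check, assuming the same trace path the test prints, is:)
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    # the test only passes when more than two delayed reconnects were recorded
    if (( delays <= 2 )); then
        echo "expected >2 reconnect delays, got $delays" >&2
        exit 1
    fi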
00:17:28.981 [2024-11-18 18:15:47.370530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:30.884 [2024-11-18 18:15:49.370730] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.884 [2024-11-18 18:15:49.370853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.884 [2024-11-18 18:15:49.370897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.884 [2024-11-18 18:15:49.370925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x852010 with addr=10.0.0.2, port=4420 00:17:30.884 [2024-11-18 18:15:49.370937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x852010 is same with the state(5) to be set 00:17:30.884 [2024-11-18 18:15:49.370963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x852010 (9): Bad file descriptor 00:17:30.884 [2024-11-18 18:15:49.370980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:30.884 [2024-11-18 18:15:49.370990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:30.884 [2024-11-18 18:15:49.371001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:30.884 [2024-11-18 18:15:49.371032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.884 [2024-11-18 18:15:49.371043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.788 [2024-11-18 18:15:51.371109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:32.788 [2024-11-18 18:15:51.371171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:32.788 [2024-11-18 18:15:51.371197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:32.788 [2024-11-18 18:15:51.371208] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:32.788 [2024-11-18 18:15:51.371235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:34.163 00:17:34.164 Latency(us) 00:17:34.164 [2024-11-18T18:15:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.164 [2024-11-18T18:15:52.768Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:17:34.164 NVMe0n1 : 8.12 2282.38 8.92 15.76 0.00 55654.14 7089.80 7015926.69 00:17:34.164 [2024-11-18T18:15:52.768Z] =================================================================================================================== 00:17:34.164 [2024-11-18T18:15:52.768Z] Total : 2282.38 8.92 15.76 0.00 55654.14 7089.80 7015926.69 00:17:34.164 0 00:17:34.164 18:15:52 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.164 Attaching 5 probes... 
00:17:34.164 1262.837332: reset bdev controller NVMe0 00:17:34.164 1262.934799: reconnect bdev controller NVMe0 00:17:34.164 3263.274851: reconnect delay bdev controller NVMe0 00:17:34.164 3263.314011: reconnect bdev controller NVMe0 00:17:34.164 5263.807524: reconnect delay bdev controller NVMe0 00:17:34.164 5263.848993: reconnect bdev controller NVMe0 00:17:34.164 7264.319003: reconnect delay bdev controller NVMe0 00:17:34.164 7264.336624: reconnect bdev controller NVMe0 00:17:34.164 18:15:52 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:17:34.164 18:15:52 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:17:34.164 18:15:52 -- host/timeout.sh@136 -- # kill 74045 00:17:34.164 18:15:52 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.164 18:15:52 -- host/timeout.sh@139 -- # killprocess 74029 00:17:34.164 18:15:52 -- common/autotest_common.sh@936 -- # '[' -z 74029 ']' 00:17:34.164 18:15:52 -- common/autotest_common.sh@940 -- # kill -0 74029 00:17:34.164 18:15:52 -- common/autotest_common.sh@941 -- # uname 00:17:34.164 18:15:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.164 18:15:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74029 00:17:34.164 18:15:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:34.164 18:15:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:34.164 killing process with pid 74029 00:17:34.164 18:15:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74029' 00:17:34.164 18:15:52 -- common/autotest_common.sh@955 -- # kill 74029 00:17:34.164 Received shutdown signal, test time was about 8.192986 seconds 00:17:34.164 00:17:34.164 Latency(us) 00:17:34.164 [2024-11-18T18:15:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.164 [2024-11-18T18:15:52.768Z] =================================================================================================================== 00:17:34.164 [2024-11-18T18:15:52.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.164 18:15:52 -- common/autotest_common.sh@960 -- # wait 74029 00:17:34.164 18:15:52 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.422 18:15:52 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:17:34.422 18:15:52 -- host/timeout.sh@145 -- # nvmftestfini 00:17:34.422 18:15:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:34.422 18:15:52 -- nvmf/common.sh@116 -- # sync 00:17:34.422 18:15:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:34.422 18:15:52 -- nvmf/common.sh@119 -- # set +e 00:17:34.422 18:15:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:34.422 18:15:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:34.422 rmmod nvme_tcp 00:17:34.422 rmmod nvme_fabrics 00:17:34.422 rmmod nvme_keyring 00:17:34.422 18:15:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:34.422 18:15:52 -- nvmf/common.sh@123 -- # set -e 00:17:34.422 18:15:52 -- nvmf/common.sh@124 -- # return 0 00:17:34.422 18:15:52 -- nvmf/common.sh@477 -- # '[' -n 73579 ']' 00:17:34.422 18:15:52 -- nvmf/common.sh@478 -- # killprocess 73579 00:17:34.422 18:15:52 -- common/autotest_common.sh@936 -- # '[' -z 73579 ']' 00:17:34.422 18:15:52 -- common/autotest_common.sh@940 -- # kill -0 73579 00:17:34.422 18:15:52 -- common/autotest_common.sh@941 -- # uname 00:17:34.422 18:15:52 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:17:34.422 18:15:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73579 00:17:34.422 18:15:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.422 killing process with pid 73579 00:17:34.423 18:15:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.423 18:15:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73579' 00:17:34.423 18:15:53 -- common/autotest_common.sh@955 -- # kill 73579 00:17:34.423 18:15:53 -- common/autotest_common.sh@960 -- # wait 73579 00:17:34.681 18:15:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:34.681 18:15:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:34.681 18:15:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:34.681 18:15:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.681 18:15:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:34.681 18:15:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.681 18:15:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.681 18:15:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.681 18:15:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:34.681 ************************************ 00:17:34.681 END TEST nvmf_timeout 00:17:34.681 ************************************ 00:17:34.681 00:17:34.681 real 0m47.055s 00:17:34.681 user 2m18.578s 00:17:34.681 sys 0m5.382s 00:17:34.681 18:15:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.681 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:34.681 18:15:53 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:17:34.681 18:15:53 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:17:34.681 18:15:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.681 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 18:15:53 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:17:34.941 00:17:34.941 real 10m34.963s 00:17:34.941 user 29m38.641s 00:17:34.941 sys 3m18.880s 00:17:34.941 18:15:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.941 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 ************************************ 00:17:34.941 END TEST nvmf_tcp 00:17:34.941 ************************************ 00:17:34.941 18:15:53 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:17:34.941 18:15:53 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:34.941 18:15:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:34.941 18:15:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.941 18:15:53 -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 ************************************ 00:17:34.941 START TEST nvmf_dif 00:17:34.941 ************************************ 00:17:34.941 18:15:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:17:34.941 * Looking for test storage... 
00:17:34.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.941 18:15:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:34.941 18:15:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:34.941 18:15:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:34.941 18:15:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:34.941 18:15:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:34.941 18:15:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.941 18:15:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.941 18:15:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.941 18:15:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.941 18:15:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.941 18:15:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.941 18:15:53 -- scripts/common.sh@337 -- # local 'op=<' 00:17:34.941 18:15:53 -- scripts/common.sh@339 -- # ver1_l=2 00:17:34.941 18:15:53 -- scripts/common.sh@340 -- # ver2_l=1 00:17:34.941 18:15:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.941 18:15:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.941 18:15:53 -- scripts/common.sh@344 -- # : 1 00:17:34.941 18:15:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.941 18:15:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.941 18:15:53 -- scripts/common.sh@364 -- # decimal 1 00:17:34.941 18:15:53 -- scripts/common.sh@352 -- # local d=1 00:17:34.941 18:15:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.941 18:15:53 -- scripts/common.sh@354 -- # echo 1 00:17:34.941 18:15:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.941 18:15:53 -- scripts/common.sh@365 -- # decimal 2 00:17:34.941 18:15:53 -- scripts/common.sh@352 -- # local d=2 00:17:34.941 18:15:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.941 18:15:53 -- scripts/common.sh@354 -- # echo 2 00:17:34.941 18:15:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:34.941 18:15:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.941 18:15:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.941 18:15:53 -- scripts/common.sh@367 -- # return 0 00:17:34.941 18:15:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:35.201 18:15:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.201 --rc genhtml_branch_coverage=1 00:17:35.201 --rc genhtml_function_coverage=1 00:17:35.201 --rc genhtml_legend=1 00:17:35.201 --rc geninfo_all_blocks=1 00:17:35.201 --rc geninfo_unexecuted_blocks=1 00:17:35.201 00:17:35.201 ' 00:17:35.201 18:15:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.201 --rc genhtml_branch_coverage=1 00:17:35.201 --rc genhtml_function_coverage=1 00:17:35.201 --rc genhtml_legend=1 00:17:35.201 --rc geninfo_all_blocks=1 00:17:35.201 --rc geninfo_unexecuted_blocks=1 00:17:35.201 00:17:35.201 ' 00:17:35.201 18:15:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.201 --rc genhtml_branch_coverage=1 00:17:35.201 --rc genhtml_function_coverage=1 00:17:35.201 --rc genhtml_legend=1 00:17:35.201 --rc geninfo_all_blocks=1 00:17:35.201 --rc geninfo_unexecuted_blocks=1 00:17:35.201 00:17:35.201 ' 00:17:35.201 
18:15:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:35.201 --rc genhtml_branch_coverage=1 00:17:35.201 --rc genhtml_function_coverage=1 00:17:35.201 --rc genhtml_legend=1 00:17:35.201 --rc geninfo_all_blocks=1 00:17:35.201 --rc geninfo_unexecuted_blocks=1 00:17:35.201 00:17:35.201 ' 00:17:35.201 18:15:53 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.201 18:15:53 -- nvmf/common.sh@7 -- # uname -s 00:17:35.201 18:15:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.201 18:15:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.201 18:15:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.201 18:15:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.201 18:15:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.201 18:15:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.201 18:15:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.201 18:15:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.201 18:15:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.201 18:15:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:17:35.201 18:15:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:17:35.201 18:15:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.201 18:15:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.201 18:15:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.201 18:15:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.201 18:15:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.201 18:15:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.201 18:15:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.201 18:15:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.201 18:15:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.201 18:15:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.201 18:15:53 -- paths/export.sh@5 -- # export PATH 00:17:35.201 18:15:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.201 18:15:53 -- nvmf/common.sh@46 -- # : 0 00:17:35.201 18:15:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:35.201 18:15:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:35.201 18:15:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:35.201 18:15:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.201 18:15:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.201 18:15:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:35.201 18:15:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:35.201 18:15:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:35.201 18:15:53 -- target/dif.sh@15 -- # NULL_META=16 00:17:35.201 18:15:53 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:17:35.201 18:15:53 -- target/dif.sh@15 -- # NULL_SIZE=64 00:17:35.201 18:15:53 -- target/dif.sh@15 -- # NULL_DIF=1 00:17:35.201 18:15:53 -- target/dif.sh@135 -- # nvmftestinit 00:17:35.201 18:15:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:35.201 18:15:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.201 18:15:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:35.201 18:15:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:35.201 18:15:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:35.201 18:15:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.201 18:15:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:17:35.201 18:15:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.201 18:15:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:35.201 18:15:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:35.201 18:15:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.201 18:15:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.201 18:15:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.201 18:15:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:35.201 18:15:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.201 18:15:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.201 18:15:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.201 18:15:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.201 18:15:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.201 18:15:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.201 18:15:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.201 18:15:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.201 18:15:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:35.201 18:15:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:35.201 Cannot find device "nvmf_tgt_br" 
00:17:35.201 18:15:53 -- nvmf/common.sh@154 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.201 Cannot find device "nvmf_tgt_br2" 00:17:35.201 18:15:53 -- nvmf/common.sh@155 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:35.201 18:15:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:35.201 Cannot find device "nvmf_tgt_br" 00:17:35.201 18:15:53 -- nvmf/common.sh@157 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:35.201 Cannot find device "nvmf_tgt_br2" 00:17:35.201 18:15:53 -- nvmf/common.sh@158 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:35.201 18:15:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:35.201 18:15:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.201 18:15:53 -- nvmf/common.sh@161 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.201 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.201 18:15:53 -- nvmf/common.sh@162 -- # true 00:17:35.201 18:15:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.201 18:15:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.201 18:15:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.202 18:15:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.202 18:15:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.202 18:15:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.202 18:15:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.202 18:15:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.202 18:15:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.202 18:15:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:35.202 18:15:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:35.202 18:15:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:35.202 18:15:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:35.202 18:15:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.460 18:15:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.460 18:15:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.460 18:15:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:35.460 18:15:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:35.460 18:15:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.460 18:15:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.460 18:15:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.460 18:15:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.460 18:15:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.460 18:15:53 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:35.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:17:35.460 00:17:35.460 --- 10.0.0.2 ping statistics --- 00:17:35.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.460 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:35.460 18:15:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:35.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:35.460 00:17:35.460 --- 10.0.0.3 ping statistics --- 00:17:35.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.460 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:35.460 18:15:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:35.460 00:17:35.460 --- 10.0.0.1 ping statistics --- 00:17:35.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.460 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:35.460 18:15:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.460 18:15:53 -- nvmf/common.sh@421 -- # return 0 00:17:35.460 18:15:53 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:17:35.460 18:15:53 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.719 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:35.719 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:35.719 18:15:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.719 18:15:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:35.719 18:15:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:35.719 18:15:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.719 18:15:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:35.719 18:15:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:35.978 18:15:54 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:17:35.978 18:15:54 -- target/dif.sh@137 -- # nvmfappstart 00:17:35.978 18:15:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.978 18:15:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.978 18:15:54 -- common/autotest_common.sh@10 -- # set +x 00:17:35.978 18:15:54 -- nvmf/common.sh@469 -- # nvmfpid=74534 00:17:35.978 18:15:54 -- nvmf/common.sh@470 -- # waitforlisten 74534 00:17:35.978 18:15:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:35.978 18:15:54 -- common/autotest_common.sh@829 -- # '[' -z 74534 ']' 00:17:35.978 18:15:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.978 18:15:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.978 18:15:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
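Condensed, the NET_TYPE=virt bring-up traced above amounts to roughly the following sequence (an illustrative summary of the commands already shown; the intermediate 'ip link set ... up' calls, the stale-interface cleanup and error handling are left out):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings: 10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The target therefore listens inside the nvmf_tgt_ns_spdk namespace, while fio connects from the host side of the bridge to 10.0.0.2:4420.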
00:17:35.978 18:15:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.978 18:15:54 -- common/autotest_common.sh@10 -- # set +x 00:17:35.978 [2024-11-18 18:15:54.400370] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:35.978 [2024-11-18 18:15:54.400493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.978 [2024-11-18 18:15:54.540378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.238 [2024-11-18 18:15:54.609871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:36.238 [2024-11-18 18:15:54.610075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.238 [2024-11-18 18:15:54.610090] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.238 [2024-11-18 18:15:54.610100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.238 [2024-11-18 18:15:54.610137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.176 18:15:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.176 18:15:55 -- common/autotest_common.sh@862 -- # return 0 00:17:37.176 18:15:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:37.176 18:15:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 18:15:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.176 18:15:55 -- target/dif.sh@139 -- # create_transport 00:17:37.176 18:15:55 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:17:37.176 18:15:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 [2024-11-18 18:15:55.482085] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.176 18:15:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.176 18:15:55 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:17:37.176 18:15:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:37.176 18:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 ************************************ 00:17:37.176 START TEST fio_dif_1_default 00:17:37.176 ************************************ 00:17:37.176 18:15:55 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:17:37.176 18:15:55 -- target/dif.sh@86 -- # create_subsystems 0 00:17:37.176 18:15:55 -- target/dif.sh@28 -- # local sub 00:17:37.176 18:15:55 -- target/dif.sh@30 -- # for sub in "$@" 00:17:37.176 18:15:55 -- target/dif.sh@31 -- # create_subsystem 0 00:17:37.176 18:15:55 -- target/dif.sh@18 -- # local sub_id=0 00:17:37.176 18:15:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:37.176 18:15:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 bdev_null0 00:17:37.176 18:15:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.176 18:15:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:37.176 18:15:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 18:15:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.176 18:15:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:37.176 18:15:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 18:15:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.176 18:15:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:37.176 18:15:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.176 18:15:55 -- common/autotest_common.sh@10 -- # set +x 00:17:37.176 [2024-11-18 18:15:55.526221] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.176 18:15:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.176 18:15:55 -- target/dif.sh@87 -- # fio /dev/fd/62 00:17:37.176 18:15:55 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:17:37.176 18:15:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:37.176 18:15:55 -- nvmf/common.sh@520 -- # config=() 00:17:37.176 18:15:55 -- nvmf/common.sh@520 -- # local subsystem config 00:17:37.176 18:15:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:37.176 18:15:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.176 18:15:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:37.176 { 00:17:37.176 "params": { 00:17:37.176 "name": "Nvme$subsystem", 00:17:37.176 "trtype": "$TEST_TRANSPORT", 00:17:37.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.176 "adrfam": "ipv4", 00:17:37.176 "trsvcid": "$NVMF_PORT", 00:17:37.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.176 "hdgst": ${hdgst:-false}, 00:17:37.176 "ddgst": ${ddgst:-false} 00:17:37.176 }, 00:17:37.176 "method": "bdev_nvme_attach_controller" 00:17:37.176 } 00:17:37.176 EOF 00:17:37.176 )") 00:17:37.176 18:15:55 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.176 18:15:55 -- target/dif.sh@82 -- # gen_fio_conf 00:17:37.176 18:15:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:37.176 18:15:55 -- target/dif.sh@54 -- # local file 00:17:37.176 18:15:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:37.176 18:15:55 -- target/dif.sh@56 -- # cat 00:17:37.176 18:15:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:37.176 18:15:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.176 18:15:55 -- nvmf/common.sh@542 -- # cat 00:17:37.176 18:15:55 -- common/autotest_common.sh@1330 -- # shift 00:17:37.176 18:15:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:37.176 18:15:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # awk '{print 
$3}' 00:17:37.176 18:15:55 -- nvmf/common.sh@544 -- # jq . 00:17:37.176 18:15:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:37.176 18:15:55 -- target/dif.sh@72 -- # (( file <= files )) 00:17:37.176 18:15:55 -- nvmf/common.sh@545 -- # IFS=, 00:17:37.176 18:15:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:37.176 "params": { 00:17:37.176 "name": "Nvme0", 00:17:37.176 "trtype": "tcp", 00:17:37.176 "traddr": "10.0.0.2", 00:17:37.176 "adrfam": "ipv4", 00:17:37.176 "trsvcid": "4420", 00:17:37.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:37.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:37.176 "hdgst": false, 00:17:37.176 "ddgst": false 00:17:37.176 }, 00:17:37.176 "method": "bdev_nvme_attach_controller" 00:17:37.176 }' 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:37.176 18:15:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:37.176 18:15:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:37.176 18:15:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:37.176 18:15:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:37.176 18:15:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:37.177 18:15:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:37.177 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:37.177 fio-3.35 00:17:37.177 Starting 1 thread 00:17:37.744 [2024-11-18 18:15:56.098193] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
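Pulling the fio_dif_1_default setup together: the test stands up one DIF-enabled null bdev behind an NVMe/TCP subsystem and points fio's spdk_bdev engine at it through the JSON printed above. A rough outline of the steps traced so far (an illustrative condensation, with rpc_cmd being the harness wrapper around scripts/rpc.py that talks to the nvmf_tgt started earlier):

  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio side: the generated bdev_nvme_attach_controller JSON goes in on /dev/fd/62,
  # the job file from gen_fio_conf on /dev/fd/61
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The rpc.c *ERROR* lines that appear as each fio run starts are presumably the plugin's embedded SPDK app failing to claim /var/tmp/spdk.sock, which the nvmf_tgt process already holds; the runs complete normally regardless.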
00:17:37.744 [2024-11-18 18:15:56.098266] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:47.721 00:17:47.721 filename0: (groupid=0, jobs=1): err= 0: pid=74606: Mon Nov 18 18:16:06 2024 00:17:47.721 read: IOPS=9407, BW=36.7MiB/s (38.5MB/s)(368MiB/10001msec) 00:17:47.721 slat (usec): min=6, max=118, avg= 8.12, stdev= 3.59 00:17:47.721 clat (usec): min=325, max=4819, avg=401.24, stdev=53.71 00:17:47.721 lat (usec): min=331, max=4850, avg=409.36, stdev=54.44 00:17:47.721 clat percentiles (usec): 00:17:47.721 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 363], 00:17:47.721 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:17:47.721 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 461], 95.00th=[ 486], 00:17:47.721 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 586], 99.95th=[ 701], 00:17:47.721 | 99.99th=[ 1532] 00:17:47.721 bw ( KiB/s): min=36000, max=39488, per=100.00%, avg=37637.05, stdev=835.50, samples=19 00:17:47.721 iops : min= 9000, max= 9872, avg=9409.26, stdev=208.99, samples=19 00:17:47.721 lat (usec) : 500=97.31%, 750=2.66%, 1000=0.02% 00:17:47.721 lat (msec) : 2=0.01%, 10=0.01% 00:17:47.721 cpu : usr=85.28%, sys=12.81%, ctx=33, majf=0, minf=9 00:17:47.721 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:47.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.721 issued rwts: total=94084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.721 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:47.721 00:17:47.721 Run status group 0 (all jobs): 00:17:47.721 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=368MiB (385MB), run=10001-10001msec 00:17:47.981 18:16:06 -- target/dif.sh@88 -- # destroy_subsystems 0 00:17:47.981 18:16:06 -- target/dif.sh@43 -- # local sub 00:17:47.981 18:16:06 -- target/dif.sh@45 -- # for sub in "$@" 00:17:47.981 18:16:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:17:47.981 18:16:06 -- target/dif.sh@36 -- # local sub_id=0 00:17:47.981 18:16:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 ************************************ 00:17:47.981 END TEST fio_dif_1_default 00:17:47.981 ************************************ 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 00:17:47.981 real 0m10.895s 00:17:47.981 user 0m9.130s 00:17:47.981 sys 0m1.492s 00:17:47.981 18:16:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:17:47.981 18:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:47.981 18:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 ************************************ 00:17:47.981 START TEST 
fio_dif_1_multi_subsystems 00:17:47.981 ************************************ 00:17:47.981 18:16:06 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:17:47.981 18:16:06 -- target/dif.sh@92 -- # local files=1 00:17:47.981 18:16:06 -- target/dif.sh@94 -- # create_subsystems 0 1 00:17:47.981 18:16:06 -- target/dif.sh@28 -- # local sub 00:17:47.981 18:16:06 -- target/dif.sh@30 -- # for sub in "$@" 00:17:47.981 18:16:06 -- target/dif.sh@31 -- # create_subsystem 0 00:17:47.981 18:16:06 -- target/dif.sh@18 -- # local sub_id=0 00:17:47.981 18:16:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 bdev_null0 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 [2024-11-18 18:16:06.484491] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@30 -- # for sub in "$@" 00:17:47.981 18:16:06 -- target/dif.sh@31 -- # create_subsystem 1 00:17:47.981 18:16:06 -- target/dif.sh@18 -- # local sub_id=1 00:17:47.981 18:16:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 bdev_null1 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.981 18:16:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.981 18:16:06 -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.981 18:16:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.981 18:16:06 -- target/dif.sh@95 -- # fio /dev/fd/62 00:17:47.981 18:16:06 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:17:47.981 18:16:06 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:17:47.981 18:16:06 -- nvmf/common.sh@520 -- # config=() 00:17:47.981 18:16:06 -- nvmf/common.sh@520 -- # local subsystem config 00:17:47.981 18:16:06 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:47.981 18:16:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:47.981 18:16:06 -- target/dif.sh@82 -- # gen_fio_conf 00:17:47.981 18:16:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:47.981 { 00:17:47.981 "params": { 00:17:47.981 "name": "Nvme$subsystem", 00:17:47.981 "trtype": "$TEST_TRANSPORT", 00:17:47.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.981 "adrfam": "ipv4", 00:17:47.981 "trsvcid": "$NVMF_PORT", 00:17:47.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.981 "hdgst": ${hdgst:-false}, 00:17:47.981 "ddgst": ${ddgst:-false} 00:17:47.981 }, 00:17:47.981 "method": "bdev_nvme_attach_controller" 00:17:47.981 } 00:17:47.981 EOF 00:17:47.981 )") 00:17:47.981 18:16:06 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:47.981 18:16:06 -- target/dif.sh@54 -- # local file 00:17:47.981 18:16:06 -- target/dif.sh@56 -- # cat 00:17:47.981 18:16:06 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:47.982 18:16:06 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:47.982 18:16:06 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:47.982 18:16:06 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:47.982 18:16:06 -- common/autotest_common.sh@1330 -- # shift 00:17:47.982 18:16:06 -- nvmf/common.sh@542 -- # cat 00:17:47.982 18:16:06 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:47.982 18:16:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.982 18:16:06 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:47.982 18:16:06 -- target/dif.sh@72 -- # (( file <= files )) 00:17:47.982 18:16:06 -- target/dif.sh@73 -- # cat 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # grep libasan 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:47.982 18:16:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:47.982 18:16:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:47.982 { 00:17:47.982 "params": { 00:17:47.982 "name": "Nvme$subsystem", 00:17:47.982 "trtype": "$TEST_TRANSPORT", 00:17:47.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.982 "adrfam": "ipv4", 00:17:47.982 "trsvcid": "$NVMF_PORT", 00:17:47.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.982 "hdgst": ${hdgst:-false}, 00:17:47.982 "ddgst": ${ddgst:-false} 00:17:47.982 }, 00:17:47.982 "method": "bdev_nvme_attach_controller" 00:17:47.982 } 00:17:47.982 EOF 00:17:47.982 )") 00:17:47.982 18:16:06 -- target/dif.sh@72 -- # (( file++ )) 00:17:47.982 18:16:06 -- 
target/dif.sh@72 -- # (( file <= files )) 00:17:47.982 18:16:06 -- nvmf/common.sh@542 -- # cat 00:17:47.982 18:16:06 -- nvmf/common.sh@544 -- # jq . 00:17:47.982 18:16:06 -- nvmf/common.sh@545 -- # IFS=, 00:17:47.982 18:16:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:47.982 "params": { 00:17:47.982 "name": "Nvme0", 00:17:47.982 "trtype": "tcp", 00:17:47.982 "traddr": "10.0.0.2", 00:17:47.982 "adrfam": "ipv4", 00:17:47.982 "trsvcid": "4420", 00:17:47.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:47.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:47.982 "hdgst": false, 00:17:47.982 "ddgst": false 00:17:47.982 }, 00:17:47.982 "method": "bdev_nvme_attach_controller" 00:17:47.982 },{ 00:17:47.982 "params": { 00:17:47.982 "name": "Nvme1", 00:17:47.982 "trtype": "tcp", 00:17:47.982 "traddr": "10.0.0.2", 00:17:47.982 "adrfam": "ipv4", 00:17:47.982 "trsvcid": "4420", 00:17:47.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.982 "hdgst": false, 00:17:47.982 "ddgst": false 00:17:47.982 }, 00:17:47.982 "method": "bdev_nvme_attach_controller" 00:17:47.982 }' 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:47.982 18:16:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:47.982 18:16:06 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:47.982 18:16:06 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:48.241 18:16:06 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:48.241 18:16:06 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:48.241 18:16:06 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:48.241 18:16:06 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:48.241 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:48.241 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:17:48.241 fio-3.35 00:17:48.241 Starting 2 threads 00:17:48.806 [2024-11-18 18:16:07.154381] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
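The multi-subsystem variant essentially fans the same recipe out twice: a second null bdev is exported as nqn.2016-06.io.spdk:cnode1 on the same 10.0.0.2:4420 listener, gen_nvmf_target_json 0 1 emits one bdev_nvme_attach_controller block per subsystem (Nvme0 and Nvme1 above), and gen_fio_conf adds a second file so fio starts two threads. Sketching just the delta (again an illustrative condensation of the commands already traced):

  rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420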
00:17:48.806 [2024-11-18 18:16:07.154464] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:58.784 00:17:58.785 filename0: (groupid=0, jobs=1): err= 0: pid=74766: Mon Nov 18 18:16:17 2024 00:17:58.785 read: IOPS=5098, BW=19.9MiB/s (20.9MB/s)(199MiB/10001msec) 00:17:58.785 slat (usec): min=6, max=224, avg=13.17, stdev= 5.30 00:17:58.785 clat (usec): min=575, max=1763, avg=748.97, stdev=64.79 00:17:58.785 lat (usec): min=581, max=1809, avg=762.14, stdev=65.87 00:17:58.785 clat percentiles (usec): 00:17:58.785 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 693], 00:17:58.785 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:17:58.785 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 865], 00:17:58.785 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 1237], 00:17:58.785 | 99.99th=[ 1418] 00:17:58.785 bw ( KiB/s): min=19776, max=20876, per=49.99%, avg=20393.05, stdev=318.83, samples=19 00:17:58.785 iops : min= 4944, max= 5219, avg=5098.26, stdev=79.71, samples=19 00:17:58.785 lat (usec) : 750=55.33%, 1000=44.59% 00:17:58.785 lat (msec) : 2=0.08% 00:17:58.785 cpu : usr=89.56%, sys=8.98%, ctx=112, majf=0, minf=9 00:17:58.785 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.785 issued rwts: total=50988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.785 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:58.785 filename1: (groupid=0, jobs=1): err= 0: pid=74767: Mon Nov 18 18:16:17 2024 00:17:58.785 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(199MiB/10001msec) 00:17:58.785 slat (nsec): min=6258, max=70044, avg=13216.55, stdev=5112.29 00:17:58.785 clat (usec): min=414, max=1793, avg=747.48, stdev=59.42 00:17:58.785 lat (usec): min=421, max=1838, avg=760.70, stdev=60.24 00:17:58.785 clat percentiles (usec): 00:17:58.785 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 693], 00:17:58.785 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:17:58.785 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:17:58.785 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 996], 00:17:58.785 | 99.99th=[ 1369] 00:17:58.785 bw ( KiB/s): min=19776, max=20908, per=50.01%, avg=20398.11, stdev=319.33, samples=19 00:17:58.785 iops : min= 4944, max= 5227, avg=5099.53, stdev=79.83, samples=19 00:17:58.785 lat (usec) : 500=0.02%, 750=57.20%, 1000=42.73% 00:17:58.785 lat (msec) : 2=0.05% 00:17:58.785 cpu : usr=90.40%, sys=8.15%, ctx=9, majf=0, minf=0 00:17:58.785 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.785 issued rwts: total=51000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.785 latency : target=0, window=0, percentile=100.00%, depth=4 00:17:58.785 00:17:58.785 Run status group 0 (all jobs): 00:17:58.785 READ: bw=39.8MiB/s (41.8MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=398MiB (418MB), run=10001-10001msec 00:17:59.044 18:16:17 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:17:59.044 18:16:17 -- target/dif.sh@43 -- # local sub 00:17:59.044 18:16:17 -- target/dif.sh@45 -- # for sub in "$@" 00:17:59.044 18:16:17 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:17:59.044 18:16:17 -- target/dif.sh@36 -- # local sub_id=0 00:17:59.044 18:16:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@45 -- # for sub in "$@" 00:17:59.044 18:16:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:17:59.044 18:16:17 -- target/dif.sh@36 -- # local sub_id=1 00:17:59.044 18:16:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 ************************************ 00:17:59.044 END TEST fio_dif_1_multi_subsystems 00:17:59.044 ************************************ 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 00:17:59.044 real 0m11.027s 00:17:59.044 user 0m18.688s 00:17:59.044 sys 0m1.951s 00:17:59.044 18:16:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:17:59.044 18:16:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:59.044 18:16:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 ************************************ 00:17:59.044 START TEST fio_dif_rand_params 00:17:59.044 ************************************ 00:17:59.044 18:16:17 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:17:59.044 18:16:17 -- target/dif.sh@100 -- # local NULL_DIF 00:17:59.044 18:16:17 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:17:59.044 18:16:17 -- target/dif.sh@103 -- # NULL_DIF=3 00:17:59.044 18:16:17 -- target/dif.sh@103 -- # bs=128k 00:17:59.044 18:16:17 -- target/dif.sh@103 -- # numjobs=3 00:17:59.044 18:16:17 -- target/dif.sh@103 -- # iodepth=3 00:17:59.044 18:16:17 -- target/dif.sh@103 -- # runtime=5 00:17:59.044 18:16:17 -- target/dif.sh@105 -- # create_subsystems 0 00:17:59.044 18:16:17 -- target/dif.sh@28 -- # local sub 00:17:59.044 18:16:17 -- target/dif.sh@30 -- # for sub in "$@" 00:17:59.044 18:16:17 -- target/dif.sh@31 -- # create_subsystem 0 00:17:59.044 18:16:17 -- target/dif.sh@18 -- # local sub_id=0 00:17:59.044 18:16:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 bdev_null0 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 
18:16:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:59.044 18:16:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.044 18:16:17 -- common/autotest_common.sh@10 -- # set +x 00:17:59.044 [2024-11-18 18:16:17.563745] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.044 18:16:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.044 18:16:17 -- target/dif.sh@106 -- # fio /dev/fd/62 00:17:59.044 18:16:17 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:17:59.044 18:16:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:17:59.044 18:16:17 -- nvmf/common.sh@520 -- # config=() 00:17:59.044 18:16:17 -- nvmf/common.sh@520 -- # local subsystem config 00:17:59.044 18:16:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:59.044 18:16:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:59.044 18:16:17 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:59.044 18:16:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:59.044 { 00:17:59.044 "params": { 00:17:59.044 "name": "Nvme$subsystem", 00:17:59.044 "trtype": "$TEST_TRANSPORT", 00:17:59.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.044 "adrfam": "ipv4", 00:17:59.044 "trsvcid": "$NVMF_PORT", 00:17:59.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.044 "hdgst": ${hdgst:-false}, 00:17:59.044 "ddgst": ${ddgst:-false} 00:17:59.044 }, 00:17:59.044 "method": "bdev_nvme_attach_controller" 00:17:59.044 } 00:17:59.044 EOF 00:17:59.044 )") 00:17:59.044 18:16:17 -- target/dif.sh@82 -- # gen_fio_conf 00:17:59.044 18:16:17 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:17:59.044 18:16:17 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:59.044 18:16:17 -- target/dif.sh@54 -- # local file 00:17:59.044 18:16:17 -- common/autotest_common.sh@1328 -- # local sanitizers 00:17:59.044 18:16:17 -- target/dif.sh@56 -- # cat 00:17:59.044 18:16:17 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:59.044 18:16:17 -- common/autotest_common.sh@1330 -- # shift 00:17:59.044 18:16:17 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:17:59.044 18:16:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:59.044 18:16:17 -- nvmf/common.sh@542 -- # cat 00:17:59.044 18:16:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:59.044 18:16:17 
-- common/autotest_common.sh@1334 -- # grep libasan 00:17:59.044 18:16:17 -- target/dif.sh@72 -- # (( file <= files )) 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:59.044 18:16:17 -- nvmf/common.sh@544 -- # jq . 00:17:59.044 18:16:17 -- nvmf/common.sh@545 -- # IFS=, 00:17:59.044 18:16:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:59.044 "params": { 00:17:59.044 "name": "Nvme0", 00:17:59.044 "trtype": "tcp", 00:17:59.044 "traddr": "10.0.0.2", 00:17:59.044 "adrfam": "ipv4", 00:17:59.044 "trsvcid": "4420", 00:17:59.044 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:59.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:59.044 "hdgst": false, 00:17:59.044 "ddgst": false 00:17:59.044 }, 00:17:59.044 "method": "bdev_nvme_attach_controller" 00:17:59.044 }' 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:59.044 18:16:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:59.044 18:16:17 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:17:59.044 18:16:17 -- common/autotest_common.sh@1334 -- # asan_lib= 00:17:59.044 18:16:17 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:17:59.045 18:16:17 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:59.045 18:16:17 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:17:59.303 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:17:59.303 ... 00:17:59.303 fio-3.35 00:17:59.304 Starting 3 threads 00:17:59.563 [2024-11-18 18:16:18.151734] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
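fio_dif_rand_params then repeats the pattern with different knobs per pass; for this first pass target/dif.sh@103 selects DIF type 3 and a heavier job shape, so the only change on the target side is the bdev creation (illustrative, condensed from the trace above):

  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # fio pass parameters set above: rw=randread, bs=128k, numjobs=3, iodepth=3, runtime=5

Everything else (subsystem cnode0, the 10.0.0.2:4420 listener, the JSON on /dev/fd/62) is generated the same way as in the earlier tests.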
00:17:59.563 [2024-11-18 18:16:18.151814] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:04.839 00:18:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=74920: Mon Nov 18 18:16:23 2024 00:18:04.839 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5008msec) 00:18:04.839 slat (nsec): min=6675, max=50854, avg=10519.63, stdev=5063.41 00:18:04.839 clat (usec): min=8251, max=12545, avg=10742.99, stdev=577.18 00:18:04.839 lat (usec): min=8258, max=12561, avg=10753.51, stdev=577.82 00:18:04.839 clat percentiles (usec): 00:18:04.839 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:18:04.839 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:18:04.839 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:18:04.839 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12518], 99.95th=[12518], 00:18:04.839 | 99.99th=[12518] 00:18:04.839 bw ( KiB/s): min=34560, max=36864, per=33.35%, avg=35642.10, stdev=815.82, samples=10 00:18:04.839 iops : min= 270, max= 288, avg=278.40, stdev= 6.45, samples=10 00:18:04.839 lat (msec) : 10=8.60%, 20=91.40% 00:18:04.839 cpu : usr=91.47%, sys=7.87%, ctx=52, majf=0, minf=9 00:18:04.839 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.839 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=74921: Mon Nov 18 18:16:23 2024 00:18:04.839 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5009msec) 00:18:04.839 slat (nsec): min=4640, max=53530, avg=9552.23, stdev=4099.41 00:18:04.839 clat (usec): min=8140, max=13074, avg=10746.70, stdev=599.96 00:18:04.839 lat (usec): min=8147, max=13090, avg=10756.26, stdev=600.38 00:18:04.839 clat percentiles (usec): 00:18:04.839 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:18:04.839 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:18:04.839 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:18:04.839 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13042], 99.95th=[13042], 00:18:04.839 | 99.99th=[13042] 00:18:04.839 bw ( KiB/s): min=34560, max=36864, per=33.35%, avg=35642.10, stdev=731.09, samples=10 00:18:04.839 iops : min= 270, max= 288, avg=278.40, stdev= 5.80, samples=10 00:18:04.839 lat (msec) : 10=7.38%, 20=92.62% 00:18:04.839 cpu : usr=91.97%, sys=7.13%, ctx=68, majf=0, minf=9 00:18:04.839 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.839 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:04.839 filename0: (groupid=0, jobs=1): err= 0: pid=74922: Mon Nov 18 18:16:23 2024 00:18:04.839 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5001msec) 00:18:04.839 slat (nsec): min=6581, max=41841, avg=9676.23, stdev=4165.67 00:18:04.839 clat (usec): min=9838, max=12492, avg=10753.47, stdev=561.86 00:18:04.839 lat (usec): min=9845, max=12516, avg=10763.15, stdev=562.42 00:18:04.839 clat percentiles (usec): 00:18:04.839 | 1.00th=[ 9896], 5.00th=[ 9896], 
10.00th=[10028], 20.00th=[10290], 00:18:04.839 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:18:04.839 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:18:04.839 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12518], 99.95th=[12518], 00:18:04.839 | 99.99th=[12518] 00:18:04.839 bw ( KiB/s): min=34560, max=36864, per=33.38%, avg=35669.33, stdev=778.59, samples=9 00:18:04.839 iops : min= 270, max= 288, avg=278.67, stdev= 6.08, samples=9 00:18:04.839 lat (msec) : 10=7.76%, 20=92.24% 00:18:04.839 cpu : usr=91.84%, sys=7.60%, ctx=7, majf=0, minf=0 00:18:04.839 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.839 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.839 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:04.839 00:18:04.839 Run status group 0 (all jobs): 00:18:04.839 READ: bw=104MiB/s (109MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=523MiB (548MB), run=5001-5009msec 00:18:05.099 18:16:23 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:05.099 18:16:23 -- target/dif.sh@43 -- # local sub 00:18:05.099 18:16:23 -- target/dif.sh@45 -- # for sub in "$@" 00:18:05.099 18:16:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:05.099 18:16:23 -- target/dif.sh@36 -- # local sub_id=0 00:18:05.099 18:16:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # bs=4k 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # numjobs=8 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # iodepth=16 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # runtime= 00:18:05.099 18:16:23 -- target/dif.sh@109 -- # files=2 00:18:05.099 18:16:23 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:05.099 18:16:23 -- target/dif.sh@28 -- # local sub 00:18:05.099 18:16:23 -- target/dif.sh@30 -- # for sub in "$@" 00:18:05.099 18:16:23 -- target/dif.sh@31 -- # create_subsystem 0 00:18:05.099 18:16:23 -- target/dif.sh@18 -- # local sub_id=0 00:18:05.099 18:16:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 bdev_null0 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:05.099 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.099 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.099 [2024-11-18 18:16:23.518426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.099 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.099 18:16:23 -- target/dif.sh@30 -- # for sub in "$@" 00:18:05.100 18:16:23 -- target/dif.sh@31 -- # create_subsystem 1 00:18:05.100 18:16:23 -- target/dif.sh@18 -- # local sub_id=1 00:18:05.100 18:16:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 bdev_null1 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@30 -- # for sub in "$@" 00:18:05.100 18:16:23 -- target/dif.sh@31 -- # create_subsystem 2 00:18:05.100 18:16:23 -- target/dif.sh@18 -- # local sub_id=2 00:18:05.100 18:16:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 bdev_null2 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:05.100 18:16:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.100 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:18:05.100 18:16:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.100 18:16:23 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:05.100 18:16:23 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:05.100 18:16:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:05.100 18:16:23 -- nvmf/common.sh@520 -- # config=() 00:18:05.100 18:16:23 -- nvmf/common.sh@520 -- # local subsystem config 00:18:05.100 18:16:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:05.100 18:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.100 { 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme$subsystem", 00:18:05.100 "trtype": "$TEST_TRANSPORT", 00:18:05.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "$NVMF_PORT", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.100 "hdgst": ${hdgst:-false}, 00:18:05.100 "ddgst": ${ddgst:-false} 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 } 00:18:05.100 EOF 00:18:05.100 )") 00:18:05.100 18:16:23 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:05.100 18:16:23 -- target/dif.sh@82 -- # gen_fio_conf 00:18:05.100 18:16:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:05.100 18:16:23 -- target/dif.sh@54 -- # local file 00:18:05.100 18:16:23 -- target/dif.sh@56 -- # cat 00:18:05.100 18:16:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.100 18:16:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:05.100 18:16:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.100 18:16:23 -- common/autotest_common.sh@1330 -- # shift 00:18:05.100 18:16:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:05.100 18:16:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # cat 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file <= files )) 00:18:05.100 18:16:23 -- target/dif.sh@73 -- # cat 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:05.100 18:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.100 { 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme$subsystem", 00:18:05.100 "trtype": "$TEST_TRANSPORT", 00:18:05.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "$NVMF_PORT", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:18:05.100 "hdgst": ${hdgst:-false}, 00:18:05.100 "ddgst": ${ddgst:-false} 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 } 00:18:05.100 EOF 00:18:05.100 )") 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file++ )) 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file <= files )) 00:18:05.100 18:16:23 -- target/dif.sh@73 -- # cat 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # cat 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file++ )) 00:18:05.100 18:16:23 -- target/dif.sh@72 -- # (( file <= files )) 00:18:05.100 18:16:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.100 { 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme$subsystem", 00:18:05.100 "trtype": "$TEST_TRANSPORT", 00:18:05.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "$NVMF_PORT", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.100 "hdgst": ${hdgst:-false}, 00:18:05.100 "ddgst": ${ddgst:-false} 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 } 00:18:05.100 EOF 00:18:05.100 )") 00:18:05.100 18:16:23 -- nvmf/common.sh@542 -- # cat 00:18:05.100 18:16:23 -- nvmf/common.sh@544 -- # jq . 00:18:05.100 18:16:23 -- nvmf/common.sh@545 -- # IFS=, 00:18:05.100 18:16:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme0", 00:18:05.100 "trtype": "tcp", 00:18:05.100 "traddr": "10.0.0.2", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "4420", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:05.100 "hdgst": false, 00:18:05.100 "ddgst": false 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 },{ 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme1", 00:18:05.100 "trtype": "tcp", 00:18:05.100 "traddr": "10.0.0.2", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "4420", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.100 "hdgst": false, 00:18:05.100 "ddgst": false 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 },{ 00:18:05.100 "params": { 00:18:05.100 "name": "Nvme2", 00:18:05.100 "trtype": "tcp", 00:18:05.100 "traddr": "10.0.0.2", 00:18:05.100 "adrfam": "ipv4", 00:18:05.100 "trsvcid": "4420", 00:18:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:05.100 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:05.100 "hdgst": false, 00:18:05.100 "ddgst": false 00:18:05.100 }, 00:18:05.100 "method": "bdev_nvme_attach_controller" 00:18:05.100 }' 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:05.100 18:16:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:05.100 18:16:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:05.100 18:16:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:05.100 18:16:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:05.100 18:16:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:05.100 18:16:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:05.360 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:05.360 ... 00:18:05.360 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:05.360 ... 00:18:05.360 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:05.360 ... 00:18:05.360 fio-3.35 00:18:05.360 Starting 24 threads 00:18:05.927 [2024-11-18 18:16:24.296797] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:05.927 [2024-11-18 18:16:24.297117] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:18.134 00:18:18.134 filename0: (groupid=0, jobs=1): err= 0: pid=75025: Mon Nov 18 18:16:34 2024 00:18:18.134 read: IOPS=225, BW=901KiB/s (922kB/s)(9052KiB/10052msec) 00:18:18.134 slat (usec): min=4, max=8034, avg=18.76, stdev=188.53 00:18:18.134 clat (usec): min=1527, max=135921, avg=70934.38, stdev=25233.04 00:18:18.134 lat (usec): min=1537, max=135935, avg=70953.15, stdev=25235.61 00:18:18.134 clat percentiles (usec): 00:18:18.134 | 1.00th=[ 1614], 5.00th=[ 10159], 10.00th=[ 45876], 20.00th=[ 55837], 00:18:18.134 | 30.00th=[ 63701], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 73925], 00:18:18.134 | 70.00th=[ 79168], 80.00th=[ 93848], 90.00th=[104334], 95.00th=[107480], 00:18:18.134 | 99.00th=[120062], 99.50th=[120062], 99.90th=[132645], 99.95th=[135267], 00:18:18.134 | 99.99th=[135267] 00:18:18.134 bw ( KiB/s): min= 664, max= 2023, per=4.25%, avg=898.35, stdev=285.45, samples=20 00:18:18.134 iops : min= 166, max= 505, avg=224.55, stdev=71.21, samples=20 00:18:18.134 lat (msec) : 2=2.83%, 4=2.12%, 20=1.41%, 50=10.65%, 100=69.33% 00:18:18.134 lat (msec) : 250=13.65% 00:18:18.134 cpu : usr=41.23%, sys=2.37%, ctx=1313, majf=0, minf=0 00:18:18.134 IO depths : 1=0.3%, 2=1.2%, 4=3.7%, 8=78.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:18.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.134 filename0: (groupid=0, jobs=1): err= 0: pid=75026: Mon Nov 18 18:16:34 2024 00:18:18.134 read: IOPS=219, BW=877KiB/s (898kB/s)(8788KiB/10016msec) 00:18:18.134 slat (usec): min=3, max=5758, avg=21.25, stdev=172.21 00:18:18.134 clat (msec): min=33, max=145, avg=72.76, stdev=22.39 00:18:18.134 lat (msec): min=33, max=145, avg=72.79, stdev=22.39 00:18:18.134 clat percentiles (msec): 00:18:18.134 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:18:18.134 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:18:18.134 | 70.00th=[ 79], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 114], 00:18:18.134 | 99.00th=[ 131], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 146], 00:18:18.134 | 99.99th=[ 146] 00:18:18.134 bw ( KiB/s): min= 512, max= 1024, per=4.14%, avg=874.35, stdev=149.08, samples=20 00:18:18.134 iops : min= 128, max= 256, avg=218.55, stdev=37.23, samples=20 00:18:18.134 lat (msec) : 50=19.21%, 100=65.59%, 250=15.20% 00:18:18.134 cpu : usr=43.01%, sys=2.20%, ctx=1413, majf=0, minf=9 
00:18:18.134 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:18.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.134 filename0: (groupid=0, jobs=1): err= 0: pid=75027: Mon Nov 18 18:16:34 2024 00:18:18.134 read: IOPS=230, BW=921KiB/s (943kB/s)(9212KiB/10005msec) 00:18:18.134 slat (usec): min=3, max=8029, avg=34.23, stdev=382.16 00:18:18.134 clat (msec): min=8, max=128, avg=69.36, stdev=20.60 00:18:18.134 lat (msec): min=8, max=128, avg=69.40, stdev=20.59 00:18:18.134 clat percentiles (msec): 00:18:18.134 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:18:18.134 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:18:18.134 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 108], 00:18:18.134 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 129], 00:18:18.134 | 99.99th=[ 129] 00:18:18.134 bw ( KiB/s): min= 712, max= 1120, per=4.29%, avg=905.26, stdev=125.00, samples=19 00:18:18.134 iops : min= 178, max= 280, avg=226.32, stdev=31.25, samples=19 00:18:18.134 lat (msec) : 10=0.56%, 50=24.71%, 100=64.35%, 250=10.38% 00:18:18.134 cpu : usr=32.38%, sys=1.80%, ctx=896, majf=0, minf=9 00:18:18.134 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:18.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.134 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.134 filename0: (groupid=0, jobs=1): err= 0: pid=75028: Mon Nov 18 18:16:34 2024 00:18:18.134 read: IOPS=214, BW=857KiB/s (878kB/s)(8588KiB/10021msec) 00:18:18.134 slat (usec): min=4, max=8026, avg=25.76, stdev=299.32 00:18:18.134 clat (msec): min=30, max=145, avg=74.54, stdev=19.57 00:18:18.134 lat (msec): min=31, max=145, avg=74.57, stdev=19.56 00:18:18.134 clat percentiles (msec): 00:18:18.134 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:18:18.134 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.134 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:18:18.134 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 144], 00:18:18.134 | 99.99th=[ 146] 00:18:18.134 bw ( KiB/s): min= 648, max= 1000, per=4.04%, avg=852.30, stdev=106.54, samples=20 00:18:18.134 iops : min= 162, max= 250, avg=213.05, stdev=26.63, samples=20 00:18:18.134 lat (msec) : 50=14.90%, 100=72.71%, 250=12.39% 00:18:18.134 cpu : usr=33.75%, sys=1.60%, ctx=919, majf=0, minf=9 00:18:18.134 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:18.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.134 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename0: (groupid=0, jobs=1): err= 0: pid=75029: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10005msec) 00:18:18.135 slat (usec): min=4, max=8023, avg=21.00, stdev=188.65 00:18:18.135 clat (msec): min=4, max=144, avg=70.92, 
stdev=20.18 00:18:18.135 lat (msec): min=4, max=144, avg=70.94, stdev=20.18 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.135 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 108], 00:18:18.135 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 140], 99.95th=[ 140], 00:18:18.135 | 99.99th=[ 144] 00:18:18.135 bw ( KiB/s): min= 720, max= 1048, per=4.19%, avg=884.68, stdev=109.41, samples=19 00:18:18.135 iops : min= 180, max= 262, avg=221.16, stdev=27.35, samples=19 00:18:18.135 lat (msec) : 10=0.67%, 20=0.04%, 50=19.08%, 100=69.48%, 250=10.74% 00:18:18.135 cpu : usr=32.01%, sys=1.67%, ctx=883, majf=0, minf=9 00:18:18.135 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:18:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename0: (groupid=0, jobs=1): err= 0: pid=75030: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=224, BW=897KiB/s (919kB/s)(9004KiB/10037msec) 00:18:18.135 slat (usec): min=5, max=4025, avg=16.35, stdev=84.66 00:18:18.135 clat (msec): min=4, max=151, avg=71.25, stdev=21.16 00:18:18.135 lat (msec): min=4, max=152, avg=71.26, stdev=21.16 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:18:18.135 | 70.00th=[ 79], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:18:18.135 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 144], 00:18:18.135 | 99.99th=[ 153] 00:18:18.135 bw ( KiB/s): min= 656, max= 1280, per=4.23%, avg=894.00, stdev=145.97, samples=20 00:18:18.135 iops : min= 164, max= 320, avg=223.50, stdev=36.49, samples=20 00:18:18.135 lat (msec) : 10=0.09%, 20=1.24%, 50=17.19%, 100=67.97%, 250=13.51% 00:18:18.135 cpu : usr=41.88%, sys=2.34%, ctx=1308, majf=0, minf=9 00:18:18.135 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename0: (groupid=0, jobs=1): err= 0: pid=75031: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=215, BW=863KiB/s (883kB/s)(8628KiB/10003msec) 00:18:18.135 slat (usec): min=4, max=4025, avg=16.40, stdev=86.50 00:18:18.135 clat (msec): min=2, max=151, avg=74.12, stdev=26.87 00:18:18.135 lat (msec): min=2, max=151, avg=74.14, stdev=26.88 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 5], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 48], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.135 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 131], 00:18:18.135 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 153], 00:18:18.135 | 99.99th=[ 153] 00:18:18.135 bw ( KiB/s): min= 512, max= 1048, per=3.94%, avg=832.37, stdev=182.34, samples=19 00:18:18.135 iops : min= 128, max= 262, avg=208.05, stdev=45.54, samples=19 00:18:18.135 lat (msec) : 4=0.74%, 
10=0.74%, 20=0.28%, 50=22.44%, 100=58.14% 00:18:18.135 lat (msec) : 250=17.66% 00:18:18.135 cpu : usr=33.61%, sys=1.54%, ctx=945, majf=0, minf=9 00:18:18.135 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:18:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename0: (groupid=0, jobs=1): err= 0: pid=75032: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=226, BW=907KiB/s (928kB/s)(9076KiB/10012msec) 00:18:18.135 slat (usec): min=3, max=8024, avg=28.00, stdev=274.19 00:18:18.135 clat (msec): min=35, max=144, avg=70.45, stdev=19.45 00:18:18.135 lat (msec): min=35, max=144, avg=70.48, stdev=19.44 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:18:18.135 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 108], 00:18:18.135 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 144], 00:18:18.135 | 99.99th=[ 144] 00:18:18.135 bw ( KiB/s): min= 720, max= 1104, per=4.28%, avg=903.50, stdev=109.96, samples=20 00:18:18.135 iops : min= 180, max= 276, avg=225.85, stdev=27.49, samples=20 00:18:18.135 lat (msec) : 50=19.13%, 100=70.21%, 250=10.67% 00:18:18.135 cpu : usr=37.66%, sys=1.92%, ctx=1203, majf=0, minf=9 00:18:18.135 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename1: (groupid=0, jobs=1): err= 0: pid=75033: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=223, BW=892KiB/s (914kB/s)(8928KiB/10007msec) 00:18:18.135 slat (usec): min=3, max=4027, avg=20.21, stdev=141.82 00:18:18.135 clat (msec): min=12, max=143, avg=71.63, stdev=20.25 00:18:18.135 lat (msec): min=12, max=143, avg=71.65, stdev=20.24 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:18:18.135 | 70.00th=[ 78], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 108], 00:18:18.135 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:18:18.135 | 99.99th=[ 144] 00:18:18.135 bw ( KiB/s): min= 672, max= 1128, per=4.21%, avg=888.80, stdev=121.30, samples=20 00:18:18.135 iops : min= 168, max= 282, avg=222.20, stdev=30.32, samples=20 00:18:18.135 lat (msec) : 20=0.31%, 50=17.65%, 100=68.46%, 250=13.58% 00:18:18.135 cpu : usr=39.57%, sys=1.90%, ctx=1269, majf=0, minf=9 00:18:18.135 IO depths : 1=0.1%, 2=0.9%, 4=3.2%, 8=80.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:18:18.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.135 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.135 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.135 filename1: (groupid=0, jobs=1): err= 0: pid=75034: Mon Nov 18 18:16:34 2024 00:18:18.135 read: IOPS=221, BW=887KiB/s 
(908kB/s)(8880KiB/10013msec) 00:18:18.135 slat (usec): min=3, max=8026, avg=22.57, stdev=219.18 00:18:18.135 clat (msec): min=31, max=132, avg=72.06, stdev=19.47 00:18:18.135 lat (msec): min=31, max=140, avg=72.09, stdev=19.49 00:18:18.135 clat percentiles (msec): 00:18:18.135 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:18:18.135 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.135 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 108], 00:18:18.135 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 131], 99.95th=[ 132], 00:18:18.135 | 99.99th=[ 133] 00:18:18.135 bw ( KiB/s): min= 712, max= 1088, per=4.18%, avg=883.80, stdev=111.21, samples=20 00:18:18.135 iops : min= 178, max= 272, avg=220.95, stdev=27.80, samples=20 00:18:18.135 lat (msec) : 50=16.62%, 100=72.34%, 250=11.04% 00:18:18.135 cpu : usr=33.48%, sys=1.67%, ctx=912, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75035: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=212, BW=852KiB/s (872kB/s)(8552KiB/10038msec) 00:18:18.136 slat (usec): min=5, max=8022, avg=18.95, stdev=193.71 00:18:18.136 clat (msec): min=11, max=155, avg=74.95, stdev=21.96 00:18:18.136 lat (msec): min=11, max=155, avg=74.97, stdev=21.97 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 14], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:18:18.136 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:18:18.136 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 110], 00:18:18.136 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:18:18.136 | 99.99th=[ 157] 00:18:18.136 bw ( KiB/s): min= 528, max= 1277, per=4.03%, avg=850.25, stdev=157.41, samples=20 00:18:18.136 iops : min= 132, max= 319, avg=212.55, stdev=39.32, samples=20 00:18:18.136 lat (msec) : 20=1.50%, 50=10.62%, 100=72.54%, 250=15.34% 00:18:18.136 cpu : usr=37.31%, sys=1.95%, ctx=1259, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=88.8%, 8=10.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75036: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=219, BW=877KiB/s (898kB/s)(8796KiB/10025msec) 00:18:18.136 slat (usec): min=3, max=8025, avg=21.68, stdev=207.81 00:18:18.136 clat (msec): min=31, max=139, avg=72.85, stdev=20.05 00:18:18.136 lat (msec): min=31, max=139, avg=72.88, stdev=20.04 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:18:18.136 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.136 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:18:18.136 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 136], 00:18:18.136 | 99.99th=[ 140] 00:18:18.136 bw ( KiB/s): min= 656, max= 1024, per=4.13%, avg=872.70, stdev=125.12, 
samples=20 00:18:18.136 iops : min= 164, max= 256, avg=218.15, stdev=31.27, samples=20 00:18:18.136 lat (msec) : 50=17.28%, 100=70.26%, 250=12.46% 00:18:18.136 cpu : usr=39.81%, sys=1.89%, ctx=1163, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75037: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=221, BW=886KiB/s (907kB/s)(8868KiB/10010msec) 00:18:18.136 slat (usec): min=4, max=8031, avg=33.22, stdev=380.16 00:18:18.136 clat (msec): min=35, max=140, avg=72.07, stdev=19.05 00:18:18.136 lat (msec): min=35, max=140, avg=72.10, stdev=19.04 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:18:18.136 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.136 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 108], 00:18:18.136 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 140], 00:18:18.136 | 99.99th=[ 140] 00:18:18.136 bw ( KiB/s): min= 712, max= 1016, per=4.17%, avg=881.60, stdev=92.04, samples=20 00:18:18.136 iops : min= 178, max= 254, avg=220.40, stdev=23.01, samples=20 00:18:18.136 lat (msec) : 50=16.78%, 100=71.85%, 250=11.37% 00:18:18.136 cpu : usr=33.56%, sys=1.87%, ctx=925, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75038: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=218, BW=874KiB/s (895kB/s)(8740KiB/10002msec) 00:18:18.136 slat (usec): min=3, max=8033, avg=28.78, stdev=313.12 00:18:18.136 clat (usec): min=1739, max=153998, avg=73101.31, stdev=26157.39 00:18:18.136 lat (usec): min=1748, max=154009, avg=73130.09, stdev=26155.60 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 6], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:18:18.136 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.136 | 70.00th=[ 83], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:18:18.136 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 155], 00:18:18.136 | 99.99th=[ 155] 00:18:18.136 bw ( KiB/s): min= 512, max= 1072, per=4.00%, avg=845.63, stdev=175.44, samples=19 00:18:18.136 iops : min= 128, max= 268, avg=211.37, stdev=43.82, samples=19 00:18:18.136 lat (msec) : 2=0.73%, 10=0.73%, 20=0.27%, 50=20.69%, 100=60.69% 00:18:18.136 lat (msec) : 250=16.89% 00:18:18.136 cpu : usr=36.84%, sys=1.96%, ctx=1069, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 
00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75039: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=226, BW=905KiB/s (926kB/s)(9080KiB/10037msec) 00:18:18.136 slat (usec): min=3, max=8022, avg=19.61, stdev=188.02 00:18:18.136 clat (msec): min=30, max=120, avg=70.63, stdev=19.70 00:18:18.136 lat (msec): min=30, max=121, avg=70.65, stdev=19.70 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:18:18.136 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:18:18.136 | 70.00th=[ 77], 80.00th=[ 89], 90.00th=[ 104], 95.00th=[ 107], 00:18:18.136 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:18:18.136 | 99.99th=[ 122] 00:18:18.136 bw ( KiB/s): min= 688, max= 1096, per=4.27%, avg=901.70, stdev=123.12, samples=20 00:18:18.136 iops : min= 172, max= 274, avg=225.40, stdev=30.75, samples=20 00:18:18.136 lat (msec) : 50=18.85%, 100=69.03%, 250=12.11% 00:18:18.136 cpu : usr=42.56%, sys=2.23%, ctx=1634, majf=0, minf=9 00:18:18.136 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:18.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.136 issued rwts: total=2270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.136 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.136 filename1: (groupid=0, jobs=1): err= 0: pid=75040: Mon Nov 18 18:16:34 2024 00:18:18.136 read: IOPS=221, BW=885KiB/s (906kB/s)(8848KiB/10001msec) 00:18:18.136 slat (usec): min=4, max=4030, avg=18.81, stdev=120.77 00:18:18.136 clat (msec): min=4, max=144, avg=72.25, stdev=22.65 00:18:18.136 lat (msec): min=4, max=144, avg=72.27, stdev=22.65 00:18:18.136 clat percentiles (msec): 00:18:18.136 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 50], 00:18:18.136 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.136 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:18:18.136 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 144], 00:18:18.136 | 99.99th=[ 144] 00:18:18.136 bw ( KiB/s): min= 544, max= 1080, per=4.08%, avg=862.26, stdev=161.02, samples=19 00:18:18.136 iops : min= 136, max= 270, avg=215.53, stdev=40.22, samples=19 00:18:18.136 lat (msec) : 10=0.99%, 50=20.57%, 100=62.93%, 250=15.51% 00:18:18.136 cpu : usr=41.42%, sys=2.13%, ctx=1237, majf=0, minf=10 00:18:18.137 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=88.4%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75041: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=219, BW=878KiB/s (899kB/s)(8792KiB/10010msec) 00:18:18.137 slat (usec): min=4, max=10023, avg=38.65, stdev=370.14 00:18:18.137 clat (msec): min=27, max=144, avg=72.69, stdev=20.33 00:18:18.137 lat (msec): min=27, max=144, avg=72.73, stdev=20.33 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:18:18.137 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.137 | 70.00th=[ 80], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 109], 00:18:18.137 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 134], 
99.95th=[ 144], 00:18:18.137 | 99.99th=[ 144] 00:18:18.137 bw ( KiB/s): min= 640, max= 1072, per=4.14%, avg=874.00, stdev=128.52, samples=20 00:18:18.137 iops : min= 160, max= 268, avg=218.50, stdev=32.13, samples=20 00:18:18.137 lat (msec) : 50=16.70%, 100=69.61%, 250=13.69% 00:18:18.137 cpu : usr=41.49%, sys=2.17%, ctx=1291, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75042: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=218, BW=876KiB/s (897kB/s)(8792KiB/10038msec) 00:18:18.137 slat (usec): min=3, max=8025, avg=30.22, stdev=352.04 00:18:18.137 clat (msec): min=11, max=155, avg=72.91, stdev=20.23 00:18:18.137 lat (msec): min=11, max=155, avg=72.94, stdev=20.24 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:18:18.137 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.137 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 108], 00:18:18.137 | 99.00th=[ 116], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:18:18.137 | 99.99th=[ 155] 00:18:18.137 bw ( KiB/s): min= 656, max= 1149, per=4.13%, avg=872.65, stdev=126.22, samples=20 00:18:18.137 iops : min= 164, max= 287, avg=218.15, stdev=31.53, samples=20 00:18:18.137 lat (msec) : 20=0.73%, 50=15.42%, 100=72.57%, 250=11.28% 00:18:18.137 cpu : usr=33.45%, sys=1.92%, ctx=905, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75043: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=215, BW=860KiB/s (881kB/s)(8620KiB/10020msec) 00:18:18.137 slat (usec): min=4, max=8024, avg=25.75, stdev=263.70 00:18:18.137 clat (msec): min=30, max=151, avg=74.28, stdev=19.97 00:18:18.137 lat (msec): min=30, max=151, avg=74.31, stdev=19.98 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:18:18.137 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:18:18.137 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:18:18.137 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 136], 00:18:18.137 | 99.99th=[ 153] 00:18:18.137 bw ( KiB/s): min= 640, max= 1024, per=4.05%, avg=855.50, stdev=121.30, samples=20 00:18:18.137 iops : min= 160, max= 256, avg=213.85, stdev=30.33, samples=20 00:18:18.137 lat (msec) : 50=17.40%, 100=69.88%, 250=12.71% 00:18:18.137 cpu : usr=38.37%, sys=1.89%, ctx=1125, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75044: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=216, BW=868KiB/s (889kB/s)(8712KiB/10037msec) 00:18:18.137 slat (usec): min=4, max=8029, avg=29.84, stdev=343.12 00:18:18.137 clat (msec): min=9, max=145, avg=73.46, stdev=20.36 00:18:18.137 lat (msec): min=9, max=145, avg=73.49, stdev=20.36 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 18], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:18:18.137 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.137 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:18:18.137 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:18:18.137 | 99.99th=[ 146] 00:18:18.137 bw ( KiB/s): min= 664, max= 1136, per=4.11%, avg=867.60, stdev=118.05, samples=20 00:18:18.137 iops : min= 166, max= 284, avg=216.90, stdev=29.51, samples=20 00:18:18.137 lat (msec) : 10=0.09%, 20=1.29%, 50=13.41%, 100=73.78%, 250=11.43% 00:18:18.137 cpu : usr=31.36%, sys=1.82%, ctx=888, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75045: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=219, BW=879KiB/s (900kB/s)(8828KiB/10040msec) 00:18:18.137 slat (usec): min=6, max=8024, avg=25.50, stdev=269.96 00:18:18.137 clat (msec): min=11, max=143, avg=72.64, stdev=20.59 00:18:18.137 lat (msec): min=11, max=143, avg=72.67, stdev=20.59 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 16], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:18:18.137 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:18:18.137 | 70.00th=[ 81], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:18:18.137 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 144], 00:18:18.137 | 99.99th=[ 144] 00:18:18.137 bw ( KiB/s): min= 632, max= 1085, per=4.15%, avg=876.25, stdev=119.55, samples=20 00:18:18.137 iops : min= 158, max= 271, avg=219.05, stdev=29.87, samples=20 00:18:18.137 lat (msec) : 20=1.45%, 50=15.09%, 100=70.64%, 250=12.82% 00:18:18.137 cpu : usr=37.74%, sys=1.78%, ctx=1081, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75046: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=227, BW=912KiB/s (934kB/s)(9128KiB/10009msec) 00:18:18.137 slat (usec): min=3, max=8027, avg=27.97, stdev=313.63 00:18:18.137 clat (msec): min=25, max=119, avg=70.07, stdev=19.74 00:18:18.137 lat (msec): min=25, max=119, avg=70.10, stdev=19.75 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:18:18.137 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:18:18.137 | 70.00th=[ 73], 80.00th=[ 85], 
90.00th=[ 105], 95.00th=[ 108], 00:18:18.137 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:18:18.137 | 99.99th=[ 121] 00:18:18.137 bw ( KiB/s): min= 712, max= 1104, per=4.30%, avg=907.60, stdev=115.16, samples=20 00:18:18.137 iops : min= 178, max= 276, avg=226.90, stdev=28.79, samples=20 00:18:18.137 lat (msec) : 50=23.53%, 100=66.08%, 250=10.39% 00:18:18.137 cpu : usr=32.75%, sys=1.68%, ctx=902, majf=0, minf=9 00:18:18.137 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.137 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.137 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.137 filename2: (groupid=0, jobs=1): err= 0: pid=75047: Mon Nov 18 18:16:34 2024 00:18:18.137 read: IOPS=207, BW=829KiB/s (849kB/s)(8320KiB/10032msec) 00:18:18.137 slat (usec): min=4, max=5036, avg=19.34, stdev=148.24 00:18:18.137 clat (msec): min=26, max=145, avg=77.05, stdev=24.71 00:18:18.137 lat (msec): min=26, max=145, avg=77.07, stdev=24.71 00:18:18.137 clat percentiles (msec): 00:18:18.137 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 54], 00:18:18.137 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:18:18.137 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 129], 00:18:18.138 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 146], 00:18:18.138 | 99.99th=[ 146] 00:18:18.138 bw ( KiB/s): min= 496, max= 1056, per=3.91%, avg=825.60, stdev=174.42, samples=20 00:18:18.138 iops : min= 124, max= 264, avg=206.40, stdev=43.60, samples=20 00:18:18.138 lat (msec) : 50=15.72%, 100=63.03%, 250=21.25% 00:18:18.138 cpu : usr=41.47%, sys=2.44%, ctx=1512, majf=0, minf=9 00:18:18.138 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=74.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:18:18.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.138 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.138 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.138 filename2: (groupid=0, jobs=1): err= 0: pid=75048: Mon Nov 18 18:16:34 2024 00:18:18.138 read: IOPS=223, BW=892KiB/s (914kB/s)(8960KiB/10040msec) 00:18:18.138 slat (usec): min=5, max=4024, avg=15.19, stdev=84.87 00:18:18.138 clat (msec): min=8, max=143, avg=71.63, stdev=21.31 00:18:18.138 lat (msec): min=8, max=143, avg=71.64, stdev=21.31 00:18:18.138 clat percentiles (msec): 00:18:18.138 | 1.00th=[ 10], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:18:18.138 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 72], 00:18:18.138 | 70.00th=[ 80], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:18:18.138 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:18:18.138 | 99.99th=[ 144] 00:18:18.138 bw ( KiB/s): min= 688, max= 1383, per=4.21%, avg=889.15, stdev=160.68, samples=20 00:18:18.138 iops : min= 172, max= 345, avg=222.25, stdev=40.05, samples=20 00:18:18.138 lat (msec) : 10=1.25%, 20=0.80%, 50=15.62%, 100=70.18%, 250=12.14% 00:18:18.138 cpu : usr=37.92%, sys=2.14%, ctx=1161, majf=0, minf=9 00:18:18.138 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:18.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.138 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.5%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.138 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.138 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:18.138 00:18:18.138 Run status group 0 (all jobs): 00:18:18.138 READ: bw=20.6MiB/s (21.6MB/s), 829KiB/s-921KiB/s (849kB/s-943kB/s), io=207MiB (217MB), run=10001-10052msec 00:18:18.138 18:16:34 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:18.138 18:16:34 -- target/dif.sh@43 -- # local sub 00:18:18.138 18:16:34 -- target/dif.sh@45 -- # for sub in "$@" 00:18:18.138 18:16:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:18.138 18:16:34 -- target/dif.sh@36 -- # local sub_id=0 00:18:18.138 18:16:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@45 -- # for sub in "$@" 00:18:18.138 18:16:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:18.138 18:16:34 -- target/dif.sh@36 -- # local sub_id=1 00:18:18.138 18:16:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@45 -- # for sub in "$@" 00:18:18.138 18:16:34 -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:18.138 18:16:34 -- target/dif.sh@36 -- # local sub_id=2 00:18:18.138 18:16:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # NULL_DIF=1 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # numjobs=2 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # iodepth=8 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # runtime=5 00:18:18.138 18:16:34 -- target/dif.sh@115 -- # files=1 00:18:18.138 18:16:34 -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:18.138 18:16:34 -- target/dif.sh@28 -- # local sub 00:18:18.138 18:16:34 -- target/dif.sh@30 -- # for sub in "$@" 00:18:18.138 18:16:34 -- target/dif.sh@31 -- # create_subsystem 0 00:18:18.138 18:16:34 -- target/dif.sh@18 -- 
# local sub_id=0 00:18:18.138 18:16:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 bdev_null0 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 [2024-11-18 18:16:34.758766] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@30 -- # for sub in "$@" 00:18:18.138 18:16:34 -- target/dif.sh@31 -- # create_subsystem 1 00:18:18.138 18:16:34 -- target/dif.sh@18 -- # local sub_id=1 00:18:18.138 18:16:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 bdev_null1 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.138 18:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.138 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:18:18.138 18:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.138 18:16:34 -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:18.138 18:16:34 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:18.138 18:16:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:18.138 18:16:34 -- nvmf/common.sh@520 -- # config=() 00:18:18.138 18:16:34 -- nvmf/common.sh@520 -- # local subsystem config 00:18:18.138 18:16:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:18:18.138 18:16:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:18.138 { 00:18:18.138 "params": { 00:18:18.138 "name": "Nvme$subsystem", 00:18:18.138 "trtype": "$TEST_TRANSPORT", 00:18:18.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.138 "adrfam": "ipv4", 00:18:18.138 "trsvcid": "$NVMF_PORT", 00:18:18.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.139 "hdgst": ${hdgst:-false}, 00:18:18.139 "ddgst": ${ddgst:-false} 00:18:18.139 }, 00:18:18.139 "method": "bdev_nvme_attach_controller" 00:18:18.139 } 00:18:18.139 EOF 00:18:18.139 )") 00:18:18.139 18:16:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:18.139 18:16:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:18.139 18:16:34 -- target/dif.sh@82 -- # gen_fio_conf 00:18:18.139 18:16:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:18.139 18:16:34 -- target/dif.sh@54 -- # local file 00:18:18.139 18:16:34 -- target/dif.sh@56 -- # cat 00:18:18.139 18:16:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.139 18:16:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:18.139 18:16:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.139 18:16:34 -- nvmf/common.sh@542 -- # cat 00:18:18.139 18:16:34 -- common/autotest_common.sh@1330 -- # shift 00:18:18.139 18:16:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:18.139 18:16:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.139 18:16:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:18.139 18:16:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:18.139 18:16:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:18.139 { 00:18:18.139 "params": { 00:18:18.139 "name": "Nvme$subsystem", 00:18:18.139 "trtype": "$TEST_TRANSPORT", 00:18:18.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.139 "adrfam": "ipv4", 00:18:18.139 "trsvcid": "$NVMF_PORT", 00:18:18.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.139 "hdgst": ${hdgst:-false}, 00:18:18.139 "ddgst": ${ddgst:-false} 00:18:18.139 }, 00:18:18.139 "method": "bdev_nvme_attach_controller" 00:18:18.139 } 00:18:18.139 EOF 00:18:18.139 )") 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:18.139 18:16:34 -- nvmf/common.sh@542 -- # cat 00:18:18.139 18:16:34 -- target/dif.sh@72 -- # (( file <= files )) 00:18:18.139 18:16:34 -- target/dif.sh@73 -- # cat 00:18:18.139 18:16:34 -- target/dif.sh@72 -- # (( file++ )) 00:18:18.139 18:16:34 -- target/dif.sh@72 -- # (( file <= files )) 00:18:18.139 18:16:34 -- nvmf/common.sh@544 -- # jq . 
00:18:18.139 18:16:34 -- nvmf/common.sh@545 -- # IFS=, 00:18:18.139 18:16:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:18.139 "params": { 00:18:18.139 "name": "Nvme0", 00:18:18.139 "trtype": "tcp", 00:18:18.139 "traddr": "10.0.0.2", 00:18:18.139 "adrfam": "ipv4", 00:18:18.139 "trsvcid": "4420", 00:18:18.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:18.139 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:18.139 "hdgst": false, 00:18:18.139 "ddgst": false 00:18:18.139 }, 00:18:18.139 "method": "bdev_nvme_attach_controller" 00:18:18.139 },{ 00:18:18.139 "params": { 00:18:18.139 "name": "Nvme1", 00:18:18.139 "trtype": "tcp", 00:18:18.139 "traddr": "10.0.0.2", 00:18:18.139 "adrfam": "ipv4", 00:18:18.139 "trsvcid": "4420", 00:18:18.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.139 "hdgst": false, 00:18:18.139 "ddgst": false 00:18:18.139 }, 00:18:18.139 "method": "bdev_nvme_attach_controller" 00:18:18.139 }' 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:18.139 18:16:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:18.139 18:16:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:18.139 18:16:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:18.139 18:16:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:18.139 18:16:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.139 18:16:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:18.139 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:18.139 ... 00:18:18.139 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:18.139 ... 00:18:18.139 fio-3.35 00:18:18.139 Starting 4 threads 00:18:18.139 [2024-11-18 18:16:35.392256] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:18.139 [2024-11-18 18:16:35.392322] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:22.337 00:18:22.337 filename0: (groupid=0, jobs=1): err= 0: pid=75190: Mon Nov 18 18:16:40 2024 00:18:22.337 read: IOPS=2225, BW=17.4MiB/s (18.2MB/s)(87.0MiB/5002msec) 00:18:22.337 slat (nsec): min=6916, max=65843, avg=13123.30, stdev=6218.60 00:18:22.337 clat (usec): min=669, max=7128, avg=3550.89, stdev=1010.29 00:18:22.337 lat (usec): min=690, max=7141, avg=3564.01, stdev=1011.06 00:18:22.337 clat percentiles (usec): 00:18:22.337 | 1.00th=[ 1106], 5.00th=[ 1254], 10.00th=[ 1352], 20.00th=[ 3097], 00:18:22.337 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3916], 00:18:22.337 | 70.00th=[ 3982], 80.00th=[ 4113], 90.00th=[ 4686], 95.00th=[ 4817], 00:18:22.337 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5932], 99.95th=[ 6128], 00:18:22.337 | 99.99th=[ 6718] 00:18:22.337 bw ( KiB/s): min=13232, max=20992, per=26.45%, avg=17671.11, stdev=2652.12, samples=9 00:18:22.337 iops : min= 1654, max= 2624, avg=2208.89, stdev=331.51, samples=9 00:18:22.337 lat (usec) : 750=0.18%, 1000=0.40% 00:18:22.337 lat (msec) : 2=12.78%, 4=57.72%, 10=28.93% 00:18:22.337 cpu : usr=92.22%, sys=6.94%, ctx=7, majf=0, minf=0 00:18:22.337 IO depths : 1=0.1%, 2=13.0%, 4=56.5%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.337 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.337 issued rwts: total=11131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:22.337 filename0: (groupid=0, jobs=1): err= 0: pid=75191: Mon Nov 18 18:16:40 2024 00:18:22.337 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5001msec) 00:18:22.337 slat (nsec): min=7522, max=61204, avg=16071.65, stdev=5410.00 00:18:22.337 clat (usec): min=1157, max=6465, avg=3889.63, stdev=522.75 00:18:22.337 lat (usec): min=1169, max=6480, avg=3905.70, stdev=522.86 00:18:22.337 clat percentiles (usec): 00:18:22.337 | 1.00th=[ 1991], 5.00th=[ 2737], 10.00th=[ 3523], 20.00th=[ 3720], 00:18:22.338 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4015], 00:18:22.338 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4621], 00:18:22.338 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5014], 99.95th=[ 5145], 00:18:22.338 | 99.99th=[ 5211] 00:18:22.338 bw ( KiB/s): min=15248, max=18068, per=24.31%, avg=16240.44, stdev=925.74, samples=9 00:18:22.338 iops : min= 1906, max= 2258, avg=2030.00, stdev=115.59, samples=9 00:18:22.338 lat (msec) : 2=1.25%, 4=53.66%, 10=45.09% 00:18:22.338 cpu : usr=92.30%, sys=6.90%, ctx=10, majf=0, minf=9 00:18:22.338 IO depths : 1=0.1%, 2=20.7%, 4=52.4%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 issued rwts: total=10131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:22.338 filename1: (groupid=0, jobs=1): err= 0: pid=75192: Mon Nov 18 18:16:40 2024 00:18:22.338 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5002msec) 00:18:22.338 slat (usec): min=7, max=317, avg=16.26, stdev= 7.30 00:18:22.338 clat (usec): min=1161, max=6492, avg=3888.96, stdev=523.39 00:18:22.338 lat (usec): min=1174, max=6504, avg=3905.22, stdev=523.29 00:18:22.338 clat percentiles 
(usec): 00:18:22.338 | 1.00th=[ 1991], 5.00th=[ 2769], 10.00th=[ 3490], 20.00th=[ 3720], 00:18:22.338 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4015], 00:18:22.338 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4621], 00:18:22.338 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5080], 99.95th=[ 5145], 00:18:22.338 | 99.99th=[ 5276] 00:18:22.338 bw ( KiB/s): min=15248, max=18068, per=24.30%, avg=16236.89, stdev=925.04, samples=9 00:18:22.338 iops : min= 1906, max= 2258, avg=2029.56, stdev=115.51, samples=9 00:18:22.338 lat (msec) : 2=1.27%, 4=53.78%, 10=44.95% 00:18:22.338 cpu : usr=91.72%, sys=7.10%, ctx=174, majf=0, minf=9 00:18:22.338 IO depths : 1=0.1%, 2=20.7%, 4=52.4%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 issued rwts: total=10131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:22.338 filename1: (groupid=0, jobs=1): err= 0: pid=75193: Mon Nov 18 18:16:40 2024 00:18:22.338 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5003msec) 00:18:22.338 slat (nsec): min=6915, max=66731, avg=13843.60, stdev=6215.65 00:18:22.338 clat (usec): min=909, max=7061, avg=3803.54, stdev=664.76 00:18:22.338 lat (usec): min=916, max=7075, avg=3817.39, stdev=665.26 00:18:22.338 clat percentiles (usec): 00:18:22.338 | 1.00th=[ 1303], 5.00th=[ 2089], 10.00th=[ 2933], 20.00th=[ 3654], 00:18:22.338 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4015], 00:18:22.338 | 70.00th=[ 4080], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4621], 00:18:22.338 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5080], 99.95th=[ 5145], 00:18:22.338 | 99.99th=[ 6456] 00:18:22.338 bw ( KiB/s): min=15248, max=19648, per=24.98%, avg=16691.78, stdev=1423.44, samples=9 00:18:22.338 iops : min= 1906, max= 2456, avg=2086.33, stdev=177.90, samples=9 00:18:22.338 lat (usec) : 1000=0.03% 00:18:22.338 lat (msec) : 2=3.93%, 4=52.36%, 10=43.69% 00:18:22.338 cpu : usr=92.44%, sys=6.68%, ctx=25, majf=0, minf=0 00:18:22.338 IO depths : 1=0.1%, 2=18.6%, 4=53.5%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.338 issued rwts: total=10390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:22.338 00:18:22.338 Run status group 0 (all jobs): 00:18:22.338 READ: bw=65.2MiB/s (68.4MB/s), 15.8MiB/s-17.4MiB/s (16.6MB/s-18.2MB/s), io=326MiB (342MB), run=5001-5003msec 00:18:22.338 18:16:40 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:18:22.338 18:16:40 -- target/dif.sh@43 -- # local sub 00:18:22.338 18:16:40 -- target/dif.sh@45 -- # for sub in "$@" 00:18:22.338 18:16:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:22.338 18:16:40 -- target/dif.sh@36 -- # local sub_id=0 00:18:22.338 18:16:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@45 -- # for sub in "$@" 00:18:22.338 18:16:40 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:22.338 18:16:40 -- target/dif.sh@36 -- # local sub_id=1 00:18:22.338 18:16:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 00:18:22.338 real 0m23.179s 00:18:22.338 user 2m4.079s 00:18:22.338 sys 0m7.890s 00:18:22.338 18:16:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:22.338 ************************************ 00:18:22.338 END TEST fio_dif_rand_params 00:18:22.338 ************************************ 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:18:22.338 18:16:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:22.338 18:16:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 ************************************ 00:18:22.338 START TEST fio_dif_digest 00:18:22.338 ************************************ 00:18:22.338 18:16:40 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:18:22.338 18:16:40 -- target/dif.sh@123 -- # local NULL_DIF 00:18:22.338 18:16:40 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:18:22.338 18:16:40 -- target/dif.sh@125 -- # local hdgst ddgst 00:18:22.338 18:16:40 -- target/dif.sh@127 -- # NULL_DIF=3 00:18:22.338 18:16:40 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:18:22.338 18:16:40 -- target/dif.sh@127 -- # numjobs=3 00:18:22.338 18:16:40 -- target/dif.sh@127 -- # iodepth=3 00:18:22.338 18:16:40 -- target/dif.sh@127 -- # runtime=10 00:18:22.338 18:16:40 -- target/dif.sh@128 -- # hdgst=true 00:18:22.338 18:16:40 -- target/dif.sh@128 -- # ddgst=true 00:18:22.338 18:16:40 -- target/dif.sh@130 -- # create_subsystems 0 00:18:22.338 18:16:40 -- target/dif.sh@28 -- # local sub 00:18:22.338 18:16:40 -- target/dif.sh@30 -- # for sub in "$@" 00:18:22.338 18:16:40 -- target/dif.sh@31 -- # create_subsystem 0 00:18:22.338 18:16:40 -- target/dif.sh@18 -- # local sub_id=0 00:18:22.338 18:16:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 bdev_null0 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:22.338 18:16:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.338 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:18:22.338 [2024-11-18 18:16:40.800771] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.338 18:16:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.338 18:16:40 -- target/dif.sh@131 -- # fio /dev/fd/62 00:18:22.338 18:16:40 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:18:22.338 18:16:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:22.338 18:16:40 -- nvmf/common.sh@520 -- # config=() 00:18:22.338 18:16:40 -- nvmf/common.sh@520 -- # local subsystem config 00:18:22.338 18:16:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.338 18:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:22.338 18:16:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.338 18:16:40 -- target/dif.sh@82 -- # gen_fio_conf 00:18:22.338 18:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:22.338 { 00:18:22.338 "params": { 00:18:22.338 "name": "Nvme$subsystem", 00:18:22.338 "trtype": "$TEST_TRANSPORT", 00:18:22.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.338 "adrfam": "ipv4", 00:18:22.338 "trsvcid": "$NVMF_PORT", 00:18:22.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.338 "hdgst": ${hdgst:-false}, 00:18:22.338 "ddgst": ${ddgst:-false} 00:18:22.338 }, 00:18:22.338 "method": "bdev_nvme_attach_controller" 00:18:22.338 } 00:18:22.338 EOF 00:18:22.338 )") 00:18:22.338 18:16:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:22.338 18:16:40 -- target/dif.sh@54 -- # local file 00:18:22.338 18:16:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.338 18:16:40 -- target/dif.sh@56 -- # cat 00:18:22.338 18:16:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:22.338 18:16:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.339 18:16:40 -- common/autotest_common.sh@1330 -- # shift 00:18:22.339 18:16:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:22.339 18:16:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.339 18:16:40 -- nvmf/common.sh@542 -- # cat 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.339 18:16:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:22.339 18:16:40 -- target/dif.sh@72 -- # (( file <= files )) 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:22.339 18:16:40 -- nvmf/common.sh@544 -- # jq . 
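[editor's note] The trace above assembles, per null-bdev subsystem, one bdev_nvme_attach_controller entry that the fio bdev plugin later consumes through --spdk_json_conf. A minimal bash sketch of the same idea, under the addresses and NQNs visible in this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0); the helper name gen_attach_entry is illustrative and not part of the SPDK scripts, and jq is assumed installed as in the trace:

    # Emit one bdev_nvme_attach_controller config entry for subsystem $1.
    # hdgst/ddgst fall back to false unless exported (the digest test sets both true).
    gen_attach_entry() {
      local sub=$1
      cat <<EOF
    {
      "params": {
        "name": "Nvme${sub}",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
        "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    }
    gen_attach_entry 0 | jq .   # pretty-print, as the trace does with 'jq .'

(The surrounding gen_nvmf_target_json helper joins one such entry per subsystem into the JSON document handed to fio.)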
00:18:22.339 18:16:40 -- nvmf/common.sh@545 -- # IFS=, 00:18:22.339 18:16:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:22.339 "params": { 00:18:22.339 "name": "Nvme0", 00:18:22.339 "trtype": "tcp", 00:18:22.339 "traddr": "10.0.0.2", 00:18:22.339 "adrfam": "ipv4", 00:18:22.339 "trsvcid": "4420", 00:18:22.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:22.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:22.339 "hdgst": true, 00:18:22.339 "ddgst": true 00:18:22.339 }, 00:18:22.339 "method": "bdev_nvme_attach_controller" 00:18:22.339 }' 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:22.339 18:16:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:22.339 18:16:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:22.339 18:16:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:22.339 18:16:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:22.339 18:16:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:22.339 18:16:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:22.598 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:22.598 ... 00:18:22.598 fio-3.35 00:18:22.598 Starting 3 threads 00:18:22.856 [2024-11-18 18:16:41.361604] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
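[editor's note] The invocation recorded above boils down to running stock fio with SPDK's bdev engine preloaded, feeding the JSON target config and the generated fio job over two file descriptors. A condensed sketch under the paths used in this workspace; config.json and job.fio stand in for the /dev/fd/62 and /dev/fd/61 streams the test passes, so their names are assumptions:

    # Run fio against SPDK bdevs: preload the fio plugin, select the spdk_bdev
    # ioengine, and point --spdk_json_conf at the NVMe-oF attach config.
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf ./config.json \
        ./job.fio

The "Unable to start RPC service" message above is expected here: the fio plugin runs its own SPDK app instance while the target already owns /var/tmp/spdk.sock.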
00:18:22.856 [2024-11-18 18:16:41.361697] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:35.066 00:18:35.066 filename0: (groupid=0, jobs=1): err= 0: pid=75299: Mon Nov 18 18:16:51 2024 00:18:35.066 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(294MiB/10003msec) 00:18:35.066 slat (nsec): min=6985, max=56127, avg=12957.59, stdev=4940.95 00:18:35.066 clat (usec): min=10254, max=14658, avg=12745.01, stdev=590.02 00:18:35.066 lat (usec): min=10267, max=14687, avg=12757.97, stdev=590.90 00:18:35.066 clat percentiles (usec): 00:18:35.066 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12387], 00:18:35.066 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:18:35.066 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:18:35.066 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:18:35.066 | 99.99th=[14615] 00:18:35.066 bw ( KiB/s): min=29184, max=31488, per=33.36%, avg=30070.11, stdev=641.45, samples=19 00:18:35.066 iops : min= 228, max= 246, avg=234.89, stdev= 5.02, samples=19 00:18:35.066 lat (msec) : 20=100.00% 00:18:35.066 cpu : usr=91.92%, sys=7.58%, ctx=6, majf=0, minf=9 00:18:35.066 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.066 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:35.066 filename0: (groupid=0, jobs=1): err= 0: pid=75300: Mon Nov 18 18:16:51 2024 00:18:35.066 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(294MiB/10003msec) 00:18:35.066 slat (nsec): min=6882, max=46285, avg=9779.86, stdev=3923.16 00:18:35.066 clat (usec): min=8510, max=14597, avg=12750.92, stdev=599.80 00:18:35.066 lat (usec): min=8517, max=14609, avg=12760.70, stdev=600.08 00:18:35.066 clat percentiles (usec): 00:18:35.066 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12387], 00:18:35.066 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:18:35.066 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:18:35.066 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:18:35.066 | 99.99th=[14615] 00:18:35.066 bw ( KiB/s): min=29184, max=31488, per=33.36%, avg=30073.26, stdev=640.67, samples=19 00:18:35.066 iops : min= 228, max= 246, avg=234.95, stdev= 5.01, samples=19 00:18:35.066 lat (msec) : 10=0.13%, 20=99.87% 00:18:35.066 cpu : usr=91.96%, sys=7.55%, ctx=11, majf=0, minf=0 00:18:35.066 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.066 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:35.066 filename0: (groupid=0, jobs=1): err= 0: pid=75301: Mon Nov 18 18:16:51 2024 00:18:35.066 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10001msec) 00:18:35.066 slat (usec): min=6, max=136, avg=13.79, stdev= 6.30 00:18:35.066 clat (usec): min=10272, max=20968, avg=12756.72, stdev=654.11 00:18:35.066 lat (usec): min=10286, max=20999, avg=12770.51, stdev=655.04 00:18:35.066 clat percentiles (usec): 00:18:35.066 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 
20.00th=[12387], 00:18:35.066 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:18:35.066 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:18:35.066 | 99.00th=[14222], 99.50th=[14484], 99.90th=[20841], 99.95th=[20841], 00:18:35.066 | 99.99th=[20841] 00:18:35.066 bw ( KiB/s): min=29184, max=31488, per=33.32%, avg=30032.74, stdev=617.88, samples=19 00:18:35.066 iops : min= 228, max= 246, avg=234.58, stdev= 4.87, samples=19 00:18:35.066 lat (msec) : 20=99.87%, 50=0.13% 00:18:35.066 cpu : usr=91.50%, sys=7.61%, ctx=277, majf=0, minf=9 00:18:35.066 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.066 issued rwts: total=2346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.066 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:35.066 00:18:35.066 Run status group 0 (all jobs): 00:18:35.066 READ: bw=88.0MiB/s (92.3MB/s), 29.3MiB/s-29.4MiB/s (30.7MB/s-30.8MB/s), io=881MiB (923MB), run=10001-10003msec 00:18:35.066 18:16:51 -- target/dif.sh@132 -- # destroy_subsystems 0 00:18:35.066 18:16:51 -- target/dif.sh@43 -- # local sub 00:18:35.066 18:16:51 -- target/dif.sh@45 -- # for sub in "$@" 00:18:35.066 18:16:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:35.066 18:16:51 -- target/dif.sh@36 -- # local sub_id=0 00:18:35.066 18:16:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:35.066 18:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.066 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:35.066 18:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.066 18:16:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:35.066 18:16:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.066 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:35.066 18:16:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.066 00:18:35.066 real 0m10.900s 00:18:35.066 user 0m28.129s 00:18:35.066 sys 0m2.510s 00:18:35.066 18:16:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:35.066 ************************************ 00:18:35.066 END TEST fio_dif_digest 00:18:35.066 18:16:51 -- common/autotest_common.sh@10 -- # set +x 00:18:35.066 ************************************ 00:18:35.066 18:16:51 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:35.066 18:16:51 -- target/dif.sh@147 -- # nvmftestfini 00:18:35.066 18:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.066 18:16:51 -- nvmf/common.sh@116 -- # sync 00:18:35.066 18:16:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.066 18:16:51 -- nvmf/common.sh@119 -- # set +e 00:18:35.066 18:16:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.066 18:16:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.066 rmmod nvme_tcp 00:18:35.066 rmmod nvme_fabrics 00:18:35.066 rmmod nvme_keyring 00:18:35.066 18:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.066 18:16:51 -- nvmf/common.sh@123 -- # set -e 00:18:35.066 18:16:51 -- nvmf/common.sh@124 -- # return 0 00:18:35.066 18:16:51 -- nvmf/common.sh@477 -- # '[' -n 74534 ']' 00:18:35.066 18:16:51 -- nvmf/common.sh@478 -- # killprocess 74534 00:18:35.066 18:16:51 -- common/autotest_common.sh@936 -- # '[' -z 74534 ']' 00:18:35.066 18:16:51 -- common/autotest_common.sh@940 -- # kill -0 
74534 00:18:35.066 18:16:51 -- common/autotest_common.sh@941 -- # uname 00:18:35.066 18:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.066 18:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74534 00:18:35.066 18:16:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.066 18:16:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.066 killing process with pid 74534 00:18:35.066 18:16:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74534' 00:18:35.066 18:16:51 -- common/autotest_common.sh@955 -- # kill 74534 00:18:35.066 18:16:51 -- common/autotest_common.sh@960 -- # wait 74534 00:18:35.066 18:16:52 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:18:35.066 18:16:52 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:35.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:35.067 Waiting for block devices as requested 00:18:35.067 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:35.067 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:35.067 18:16:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:35.067 18:16:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:35.067 18:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.067 18:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:35.067 18:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.067 18:16:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:35.067 00:18:35.067 real 0m59.232s 00:18:35.067 user 3m47.537s 00:18:35.067 sys 0m18.895s 00:18:35.067 18:16:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:35.067 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:18:35.067 ************************************ 00:18:35.067 END TEST nvmf_dif 00:18:35.067 ************************************ 00:18:35.067 18:16:52 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:35.067 18:16:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:35.067 18:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:35.067 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:18:35.067 ************************************ 00:18:35.067 START TEST nvmf_abort_qd_sizes 00:18:35.067 ************************************ 00:18:35.067 18:16:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:18:35.067 * Looking for test storage... 
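[editor's note] Before the next test starts, nvmftestfini above unwinds the software target and the virtual network. A condensed sketch of that cleanup under the pid (74534) and interface/namespace names used in this run; exact ordering and error handling in the real scripts differ slightly:

    sync                                   # flush I/O before unloading initiator modules
    modprobe -v -r nvme-tcp                # drop kernel initiator transports
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                        # stop the nvmf_tgt app (74534 here)
    wait "$nvmfpid" || true
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # hand NVMe devices back to the kernel
    ip netns delete nvmf_tgt_ns_spdk       # remove the target namespace and its veths
    ip -4 addr flush nvmf_init_if          # clear the initiator-side 10.0.0.1 address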
00:18:35.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:35.067 18:16:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:35.067 18:16:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:35.067 18:16:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:35.067 18:16:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:35.067 18:16:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:35.067 18:16:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:35.067 18:16:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:35.067 18:16:52 -- scripts/common.sh@335 -- # IFS=.-: 00:18:35.067 18:16:52 -- scripts/common.sh@335 -- # read -ra ver1 00:18:35.067 18:16:52 -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.067 18:16:52 -- scripts/common.sh@336 -- # read -ra ver2 00:18:35.067 18:16:52 -- scripts/common.sh@337 -- # local 'op=<' 00:18:35.067 18:16:52 -- scripts/common.sh@339 -- # ver1_l=2 00:18:35.067 18:16:52 -- scripts/common.sh@340 -- # ver2_l=1 00:18:35.067 18:16:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:35.067 18:16:52 -- scripts/common.sh@343 -- # case "$op" in 00:18:35.067 18:16:52 -- scripts/common.sh@344 -- # : 1 00:18:35.067 18:16:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:35.067 18:16:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.067 18:16:52 -- scripts/common.sh@364 -- # decimal 1 00:18:35.067 18:16:52 -- scripts/common.sh@352 -- # local d=1 00:18:35.067 18:16:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.067 18:16:52 -- scripts/common.sh@354 -- # echo 1 00:18:35.067 18:16:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:35.067 18:16:52 -- scripts/common.sh@365 -- # decimal 2 00:18:35.067 18:16:52 -- scripts/common.sh@352 -- # local d=2 00:18:35.067 18:16:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.067 18:16:52 -- scripts/common.sh@354 -- # echo 2 00:18:35.067 18:16:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:35.067 18:16:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:35.067 18:16:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:35.067 18:16:52 -- scripts/common.sh@367 -- # return 0 00:18:35.067 18:16:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.067 18:16:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:35.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.067 --rc genhtml_branch_coverage=1 00:18:35.067 --rc genhtml_function_coverage=1 00:18:35.067 --rc genhtml_legend=1 00:18:35.067 --rc geninfo_all_blocks=1 00:18:35.067 --rc geninfo_unexecuted_blocks=1 00:18:35.067 00:18:35.067 ' 00:18:35.067 18:16:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:35.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.067 --rc genhtml_branch_coverage=1 00:18:35.067 --rc genhtml_function_coverage=1 00:18:35.067 --rc genhtml_legend=1 00:18:35.067 --rc geninfo_all_blocks=1 00:18:35.067 --rc geninfo_unexecuted_blocks=1 00:18:35.067 00:18:35.067 ' 00:18:35.067 18:16:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:35.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.067 --rc genhtml_branch_coverage=1 00:18:35.067 --rc genhtml_function_coverage=1 00:18:35.067 --rc genhtml_legend=1 00:18:35.067 --rc geninfo_all_blocks=1 00:18:35.067 --rc geninfo_unexecuted_blocks=1 00:18:35.067 00:18:35.067 ' 00:18:35.067 
18:16:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:35.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.067 --rc genhtml_branch_coverage=1 00:18:35.067 --rc genhtml_function_coverage=1 00:18:35.067 --rc genhtml_legend=1 00:18:35.067 --rc geninfo_all_blocks=1 00:18:35.067 --rc geninfo_unexecuted_blocks=1 00:18:35.067 00:18:35.067 ' 00:18:35.067 18:16:52 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.067 18:16:52 -- nvmf/common.sh@7 -- # uname -s 00:18:35.067 18:16:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.067 18:16:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.067 18:16:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.067 18:16:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.067 18:16:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.067 18:16:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.067 18:16:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.067 18:16:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.067 18:16:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.067 18:16:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:18:35.067 18:16:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 00:18:35.067 18:16:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.067 18:16:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.067 18:16:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.067 18:16:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.067 18:16:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.067 18:16:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.067 18:16:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.067 18:16:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 18:16:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 18:16:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 18:16:52 -- paths/export.sh@5 -- # export PATH 00:18:35.067 18:16:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.067 18:16:52 -- nvmf/common.sh@46 -- # : 0 00:18:35.067 18:16:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.067 18:16:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.067 18:16:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.067 18:16:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.067 18:16:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.067 18:16:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.067 18:16:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.067 18:16:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.067 18:16:52 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:18:35.067 18:16:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:35.067 18:16:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.067 18:16:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.067 18:16:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.067 18:16:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.067 18:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.067 18:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:35.067 18:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.067 18:16:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:35.067 18:16:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:35.067 18:16:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.067 18:16:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.067 18:16:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:35.067 18:16:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:35.068 18:16:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.068 18:16:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.068 18:16:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.068 18:16:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.068 18:16:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.068 18:16:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.068 18:16:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.068 18:16:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.068 18:16:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:35.068 18:16:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:35.068 Cannot find device "nvmf_tgt_br" 00:18:35.068 18:16:52 -- nvmf/common.sh@154 -- # true 00:18:35.068 18:16:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.068 Cannot find device "nvmf_tgt_br2" 00:18:35.068 18:16:52 -- nvmf/common.sh@155 -- # true 
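[editor's note] The entries that follow (nvmf_veth_init) build the virtual topology the TCP tests run over: one initiator veth on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, all tied together by a Linux bridge, with 10.0.0.1/2/3 assigned. A condensed sketch of those same steps, using the names and addresses shown in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target reachability check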
00:18:35.068 18:16:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:35.068 18:16:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:35.068 Cannot find device "nvmf_tgt_br" 00:18:35.068 18:16:52 -- nvmf/common.sh@157 -- # true 00:18:35.068 18:16:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:35.068 Cannot find device "nvmf_tgt_br2" 00:18:35.068 18:16:52 -- nvmf/common.sh@158 -- # true 00:18:35.068 18:16:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:35.068 18:16:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:35.068 18:16:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.068 18:16:52 -- nvmf/common.sh@161 -- # true 00:18:35.068 18:16:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:35.068 18:16:52 -- nvmf/common.sh@162 -- # true 00:18:35.068 18:16:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:35.068 18:16:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:35.068 18:16:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:35.068 18:16:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:35.068 18:16:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:35.068 18:16:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:35.068 18:16:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:35.068 18:16:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:35.068 18:16:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:35.068 18:16:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:35.068 18:16:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:35.068 18:16:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:35.068 18:16:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:35.068 18:16:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:35.068 18:16:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:35.068 18:16:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:35.068 18:16:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:35.068 18:16:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:35.068 18:16:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:35.068 18:16:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:35.068 18:16:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:35.068 18:16:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:35.068 18:16:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:35.068 18:16:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:35.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:35.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:18:35.068 00:18:35.068 --- 10.0.0.2 ping statistics --- 00:18:35.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.068 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:35.068 18:16:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:35.068 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:35.068 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:35.068 00:18:35.068 --- 10.0.0.3 ping statistics --- 00:18:35.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.068 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:35.068 18:16:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:35.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:35.068 00:18:35.068 --- 10.0.0.1 ping statistics --- 00:18:35.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.068 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:35.068 18:16:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.068 18:16:53 -- nvmf/common.sh@421 -- # return 0 00:18:35.068 18:16:53 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:35.068 18:16:53 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:35.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:35.328 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:18:35.587 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:18:35.587 18:16:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.587 18:16:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:35.587 18:16:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:35.587 18:16:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.587 18:16:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:35.587 18:16:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:35.587 18:16:54 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:18:35.587 18:16:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:35.587 18:16:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:35.587 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.587 18:16:54 -- nvmf/common.sh@469 -- # nvmfpid=75898 00:18:35.587 18:16:54 -- nvmf/common.sh@470 -- # waitforlisten 75898 00:18:35.587 18:16:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:18:35.587 18:16:54 -- common/autotest_common.sh@829 -- # '[' -z 75898 ']' 00:18:35.587 18:16:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.587 18:16:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.587 18:16:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.587 18:16:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.587 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.587 [2024-11-18 18:16:54.077278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
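[editor's note] nvmfappstart above launches the SPDK target inside the namespace and blocks until its RPC socket answers. A minimal sketch of that pattern; the rpc.py path follows this repo layout and the polling loop is illustrative rather than the exact waitforlisten implementation:

    # Start nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done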
00:18:35.587 [2024-11-18 18:16:54.077393] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.847 [2024-11-18 18:16:54.218803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.847 [2024-11-18 18:16:54.289669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:35.847 [2024-11-18 18:16:54.289873] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.847 [2024-11-18 18:16:54.289891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.847 [2024-11-18 18:16:54.289902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.847 [2024-11-18 18:16:54.290057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.847 [2024-11-18 18:16:54.292574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.847 [2024-11-18 18:16:54.292690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.847 [2024-11-18 18:16:54.292700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.787 18:16:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.787 18:16:55 -- common/autotest_common.sh@862 -- # return 0 00:18:36.787 18:16:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:36.787 18:16:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.787 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.787 18:16:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.787 18:16:55 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:18:36.787 18:16:55 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:18:36.787 18:16:55 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:18:36.787 18:16:55 -- scripts/common.sh@311 -- # local bdf bdfs 00:18:36.787 18:16:55 -- scripts/common.sh@312 -- # local nvmes 00:18:36.787 18:16:55 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:18:36.787 18:16:55 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:36.787 18:16:55 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:18:36.787 18:16:55 -- scripts/common.sh@297 -- # local bdf= 00:18:36.787 18:16:55 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:18:36.787 18:16:55 -- scripts/common.sh@232 -- # local class 00:18:36.787 18:16:55 -- scripts/common.sh@233 -- # local subclass 00:18:36.787 18:16:55 -- scripts/common.sh@234 -- # local progif 00:18:36.787 18:16:55 -- scripts/common.sh@235 -- # printf %02x 1 00:18:36.787 18:16:55 -- scripts/common.sh@235 -- # class=01 00:18:36.787 18:16:55 -- scripts/common.sh@236 -- # printf %02x 8 00:18:36.787 18:16:55 -- scripts/common.sh@236 -- # subclass=08 00:18:36.787 18:16:55 -- scripts/common.sh@237 -- # printf %02x 2 00:18:36.787 18:16:55 -- scripts/common.sh@237 -- # progif=02 00:18:36.787 18:16:55 -- scripts/common.sh@239 -- # hash lspci 00:18:36.787 18:16:55 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:18:36.787 18:16:55 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:18:36.787 18:16:55 -- scripts/common.sh@242 -- # grep -i -- -p02 00:18:36.787 18:16:55 -- 
scripts/common.sh@244 -- # tr -d '"' 00:18:36.787 18:16:55 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:36.787 18:16:55 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:36.787 18:16:55 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:18:36.787 18:16:55 -- scripts/common.sh@15 -- # local i 00:18:36.787 18:16:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:18:36.787 18:16:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:36.787 18:16:55 -- scripts/common.sh@24 -- # return 0 00:18:36.787 18:16:55 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:18:36.787 18:16:55 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:36.787 18:16:55 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:18:36.787 18:16:55 -- scripts/common.sh@15 -- # local i 00:18:36.787 18:16:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:18:36.787 18:16:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:18:36.787 18:16:55 -- scripts/common.sh@24 -- # return 0 00:18:36.787 18:16:55 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:18:36.787 18:16:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:36.787 18:16:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:18:36.787 18:16:55 -- scripts/common.sh@322 -- # uname -s 00:18:36.787 18:16:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:36.787 18:16:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:36.788 18:16:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:18:36.788 18:16:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:18:36.788 18:16:55 -- scripts/common.sh@322 -- # uname -s 00:18:36.788 18:16:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:18:36.788 18:16:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:18:36.788 18:16:55 -- scripts/common.sh@327 -- # (( 2 )) 00:18:36.788 18:16:55 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:18:36.788 18:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:36.788 18:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 ************************************ 00:18:36.788 START TEST spdk_target_abort 00:18:36.788 ************************************ 00:18:36.788 18:16:55 -- common/autotest_common.sh@1114 -- # spdk_target 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:18:36.788 18:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 spdk_targetn1 00:18:36.788 18:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:36.788 18:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 [2024-11-18 
18:16:55.268761] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.788 18:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:18:36.788 18:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 18:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:18:36.788 18:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 18:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:18:36.788 18:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.788 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.788 [2024-11-18 18:16:55.296981] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.788 18:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:36.788 18:16:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:40.143 Initializing NVMe Controllers 00:18:40.143 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:40.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:40.143 Initialization complete. Launching workers. 00:18:40.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10429, failed: 0 00:18:40.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1034, failed to submit 9395 00:18:40.143 success 833, unsuccess 201, failed 0 00:18:40.143 18:16:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:40.143 18:16:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:43.431 Initializing NVMe Controllers 00:18:43.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:43.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:43.431 Initialization complete. Launching workers. 00:18:43.431 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8880, failed: 0 00:18:43.431 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1169, failed to submit 7711 00:18:43.431 success 383, unsuccess 786, failed 0 00:18:43.431 18:17:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:43.431 18:17:01 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:18:46.719 Initializing NVMe Controllers 00:18:46.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:18:46.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:18:46.719 Initialization complete. Launching workers. 
00:18:46.719 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31400, failed: 0 00:18:46.719 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2298, failed to submit 29102 00:18:46.719 success 448, unsuccess 1850, failed 0 00:18:46.719 18:17:05 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:18:46.719 18:17:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.719 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.719 18:17:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.719 18:17:05 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:18:46.719 18:17:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.719 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.719 18:17:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.719 18:17:05 -- target/abort_qd_sizes.sh@62 -- # killprocess 75898 00:18:46.719 18:17:05 -- common/autotest_common.sh@936 -- # '[' -z 75898 ']' 00:18:46.719 18:17:05 -- common/autotest_common.sh@940 -- # kill -0 75898 00:18:46.719 18:17:05 -- common/autotest_common.sh@941 -- # uname 00:18:46.719 18:17:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.719 18:17:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75898 00:18:46.719 killing process with pid 75898 00:18:46.719 18:17:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:46.719 18:17:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:46.719 18:17:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75898' 00:18:46.719 18:17:05 -- common/autotest_common.sh@955 -- # kill 75898 00:18:46.719 18:17:05 -- common/autotest_common.sh@960 -- # wait 75898 00:18:46.978 ************************************ 00:18:46.978 END TEST spdk_target_abort 00:18:46.978 ************************************ 00:18:46.978 00:18:46.978 real 0m10.264s 00:18:46.978 user 0m41.977s 00:18:46.978 sys 0m2.057s 00:18:46.978 18:17:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:46.978 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.978 18:17:05 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:18:46.978 18:17:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:46.978 18:17:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.978 18:17:05 -- common/autotest_common.sh@10 -- # set +x 00:18:46.978 ************************************ 00:18:46.978 START TEST kernel_target_abort 00:18:46.978 ************************************ 00:18:46.978 18:17:05 -- common/autotest_common.sh@1114 -- # kernel_target 00:18:46.978 18:17:05 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:18:46.978 18:17:05 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:18:46.978 18:17:05 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:18:46.978 18:17:05 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:18:46.978 18:17:05 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:18:46.978 18:17:05 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:46.978 18:17:05 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:46.978 18:17:05 -- nvmf/common.sh@627 -- # local block nvme 00:18:46.978 18:17:05 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:18:46.978 18:17:05 -- nvmf/common.sh@630 -- # modprobe nvmet 00:18:46.978 18:17:05 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:46.978 18:17:05 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:47.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:47.496 Waiting for block devices as requested 00:18:47.496 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:18:47.496 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:18:47.496 18:17:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:47.496 18:17:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:47.496 18:17:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:18:47.496 18:17:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:18:47.496 18:17:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:47.496 No valid GPT data, bailing 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # pt= 00:18:47.756 18:17:06 -- scripts/common.sh@394 -- # return 1 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:18:47.756 18:17:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:47.756 18:17:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:18:47.756 18:17:06 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:18:47.756 18:17:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:47.756 No valid GPT data, bailing 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # pt= 00:18:47.756 18:17:06 -- scripts/common.sh@394 -- # return 1 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:18:47.756 18:17:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:47.756 18:17:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:18:47.756 18:17:06 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:18:47.756 18:17:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:18:47.756 No valid GPT data, bailing 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # pt= 00:18:47.756 18:17:06 -- scripts/common.sh@394 -- # return 1 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:18:47.756 18:17:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:18:47.756 18:17:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:18:47.756 18:17:06 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:18:47.756 18:17:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:18:47.756 No valid GPT data, bailing 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:18:47.756 18:17:06 -- scripts/common.sh@393 -- # pt= 00:18:47.756 18:17:06 -- scripts/common.sh@394 -- # return 1 00:18:47.756 18:17:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:18:47.756 18:17:06 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:18:47.756 18:17:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:18:47.756 18:17:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:47.756 18:17:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:47.756 18:17:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:18:47.756 18:17:06 -- nvmf/common.sh@654 -- # echo 1 00:18:47.756 18:17:06 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:18:47.756 18:17:06 -- nvmf/common.sh@656 -- # echo 1 00:18:47.756 18:17:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:18:47.756 18:17:06 -- nvmf/common.sh@663 -- # echo tcp 00:18:47.756 18:17:06 -- nvmf/common.sh@664 -- # echo 4420 00:18:47.756 18:17:06 -- nvmf/common.sh@665 -- # echo ipv4 00:18:47.756 18:17:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:47.756 18:17:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 --hostid=9f9bd036-cbb8-4723-a3c8-3fe42e60e3d5 -a 10.0.0.1 -t tcp -s 4420 00:18:48.015 00:18:48.015 Discovery Log Number of Records 2, Generation counter 2 00:18:48.015 =====Discovery Log Entry 0====== 00:18:48.015 trtype: tcp 00:18:48.015 adrfam: ipv4 00:18:48.015 subtype: current discovery subsystem 00:18:48.015 treq: not specified, sq flow control disable supported 00:18:48.015 portid: 1 00:18:48.015 trsvcid: 4420 00:18:48.015 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:48.015 traddr: 10.0.0.1 00:18:48.015 eflags: none 00:18:48.015 sectype: none 00:18:48.015 =====Discovery Log Entry 1====== 00:18:48.015 trtype: tcp 00:18:48.015 adrfam: ipv4 00:18:48.015 subtype: nvme subsystem 00:18:48.015 treq: not specified, sq flow control disable supported 00:18:48.015 portid: 1 00:18:48.015 trsvcid: 4420 00:18:48.015 subnqn: kernel_target 00:18:48.015 traddr: 10.0.0.1 00:18:48.015 eflags: none 00:18:48.015 sectype: none 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
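[editor's note] configure_kernel_target above exports a local block device over NVMe/TCP with the in-kernel nvmet target via configfs. A sketch of the same sequence using the values from this run (/dev/nvme1n3, 10.0.0.1:4420); the trace does not show the redirection targets, so the attribute file names below follow the standard nvmet configfs layout rather than being copied from the log:

    modprobe nvmet
    modprobe nvmet-tcp
    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-kernel_target > "$sub/attr_model"           # model string
    echo 1                  > "$sub/attr_allow_any_host"  # no host allow-list
    echo /dev/nvme1n3       > "$sub/namespaces/1/device_path"
    echo 1                  > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list kernel_target, as above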
00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:48.015 18:17:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:51.302 Initializing NVMe Controllers 00:18:51.302 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:51.302 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:51.302 Initialization complete. Launching workers. 00:18:51.302 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30934, failed: 0 00:18:51.302 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30934, failed to submit 0 00:18:51.302 success 0, unsuccess 30934, failed 0 00:18:51.302 18:17:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:51.302 18:17:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:54.659 Initializing NVMe Controllers 00:18:54.659 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:54.659 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:54.659 Initialization complete. Launching workers. 00:18:54.659 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65132, failed: 0 00:18:54.659 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27070, failed to submit 38062 00:18:54.659 success 0, unsuccess 27070, failed 0 00:18:54.659 18:17:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:18:54.659 18:17:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:18:57.979 Initializing NVMe Controllers 00:18:57.979 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:18:57.979 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:18:57.979 Initialization complete. Launching workers. 
00:18:57.979 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75769, failed: 0 00:18:57.979 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18898, failed to submit 56871 00:18:57.979 success 0, unsuccess 18898, failed 0 00:18:57.979 18:17:15 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:18:57.979 18:17:15 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:18:57.979 18:17:15 -- nvmf/common.sh@677 -- # echo 0 00:18:57.979 18:17:15 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:18:57.979 18:17:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:18:57.979 18:17:15 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:57.979 18:17:15 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:18:57.979 18:17:15 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:18:57.979 18:17:15 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:18:57.979 00:18:57.979 real 0m10.472s 00:18:57.979 user 0m5.544s 00:18:57.979 sys 0m2.373s 00:18:57.979 18:17:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:57.979 18:17:15 -- common/autotest_common.sh@10 -- # set +x 00:18:57.979 ************************************ 00:18:57.979 END TEST kernel_target_abort 00:18:57.979 ************************************ 00:18:57.979 18:17:16 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:18:57.979 18:17:16 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:18:57.979 18:17:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.979 18:17:16 -- nvmf/common.sh@116 -- # sync 00:18:57.979 18:17:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.979 18:17:16 -- nvmf/common.sh@119 -- # set +e 00:18:57.979 18:17:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.979 18:17:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.979 rmmod nvme_tcp 00:18:57.979 rmmod nvme_fabrics 00:18:57.979 rmmod nvme_keyring 00:18:57.979 18:17:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.979 18:17:16 -- nvmf/common.sh@123 -- # set -e 00:18:57.979 18:17:16 -- nvmf/common.sh@124 -- # return 0 00:18:57.979 18:17:16 -- nvmf/common.sh@477 -- # '[' -n 75898 ']' 00:18:57.979 18:17:16 -- nvmf/common.sh@478 -- # killprocess 75898 00:18:57.979 18:17:16 -- common/autotest_common.sh@936 -- # '[' -z 75898 ']' 00:18:57.979 18:17:16 -- common/autotest_common.sh@940 -- # kill -0 75898 00:18:57.979 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75898) - No such process 00:18:57.979 Process with pid 75898 is not found 00:18:57.979 18:17:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75898 is not found' 00:18:57.979 18:17:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:18:57.980 18:17:16 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:58.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.240 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:18:58.240 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:18:58.240 18:17:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:58.240 18:17:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:58.240 18:17:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.240 18:17:16 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:18:58.240 18:17:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.240 18:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:58.240 18:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.499 18:17:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:58.499 00:18:58.499 real 0m24.219s 00:18:58.499 user 0m48.998s 00:18:58.499 sys 0m5.713s 00:18:58.499 18:17:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.499 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.499 ************************************ 00:18:58.499 END TEST nvmf_abort_qd_sizes 00:18:58.499 ************************************ 00:18:58.499 18:17:16 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:58.499 18:17:16 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:18:58.499 18:17:16 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:18:58.499 18:17:16 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:18:58.499 18:17:16 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:18:58.499 18:17:16 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:18:58.499 18:17:16 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:18:58.499 18:17:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.499 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.499 18:17:16 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:18:58.499 18:17:16 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:18:58.499 18:17:16 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:18:58.499 18:17:16 -- common/autotest_common.sh@10 -- # set +x 00:19:00.404 INFO: APP EXITING 00:19:00.404 INFO: killing all VMs 00:19:00.404 INFO: killing vhost app 00:19:00.404 INFO: EXIT DONE 00:19:00.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.923 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:00.923 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:01.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:01.490 Cleaning 00:19:01.490 Removing: /var/run/dpdk/spdk0/config 00:19:01.490 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:01.490 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:01.490 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:01.490 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:01.490 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:01.490 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:01.490 Removing: /var/run/dpdk/spdk1/config 00:19:01.490 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:01.490 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:01.490 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:01.490 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:01.490 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:01.490 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:01.490 Removing: /var/run/dpdk/spdk2/config 00:19:01.490 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:01.490 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:01.490 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:01.749 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:01.749 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:01.749 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:01.749 Removing: /var/run/dpdk/spdk3/config 00:19:01.749 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:01.749 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:01.749 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:01.749 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:01.749 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:01.749 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:01.749 Removing: /var/run/dpdk/spdk4/config 00:19:01.749 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:01.749 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:01.749 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:01.749 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:01.749 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:01.749 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:01.749 Removing: /dev/shm/nvmf_trace.0 00:19:01.749 Removing: /dev/shm/spdk_tgt_trace.pid53824 00:19:01.749 Removing: /var/run/dpdk/spdk0 00:19:01.749 Removing: /var/run/dpdk/spdk1 00:19:01.749 Removing: /var/run/dpdk/spdk2 00:19:01.749 Removing: /var/run/dpdk/spdk3 00:19:01.749 Removing: /var/run/dpdk/spdk4 00:19:01.749 Removing: /var/run/dpdk/spdk_pid53672 00:19:01.749 Removing: /var/run/dpdk/spdk_pid53824 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54077 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54262 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54415 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54481 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54564 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54662 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54746 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54779 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54809 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54883 00:19:01.749 Removing: /var/run/dpdk/spdk_pid54964 00:19:01.749 Removing: /var/run/dpdk/spdk_pid55396 00:19:01.749 Removing: /var/run/dpdk/spdk_pid55443 00:19:01.749 Removing: /var/run/dpdk/spdk_pid55494 00:19:01.749 Removing: /var/run/dpdk/spdk_pid55510 00:19:01.749 Removing: /var/run/dpdk/spdk_pid55571 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55587 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55649 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55665 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55710 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55728 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55774 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55792 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55916 00:19:01.750 Removing: /var/run/dpdk/spdk_pid55946 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56033 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56079 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56109 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56162 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56187 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56216 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56230 
00:19:01.750 Removing: /var/run/dpdk/spdk_pid56270 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56284 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56313 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56333 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56367 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56381 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56416 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56435 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56470 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56484 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56518 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56538 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56567 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56586 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56621 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56635 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56669 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56689 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56718 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56743 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56772 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56786 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56826 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56840 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56869 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56893 00:19:01.750 Removing: /var/run/dpdk/spdk_pid56923 00:19:02.009 Removing: /var/run/dpdk/spdk_pid56937 00:19:02.009 Removing: /var/run/dpdk/spdk_pid56976 00:19:02.009 Removing: /var/run/dpdk/spdk_pid56994 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57032 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57049 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57086 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57108 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57141 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57156 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57192 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57263 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57351 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57689 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57702 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57733 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57744 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57759 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57777 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57784 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57803 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57821 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57828 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57847 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57865 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57872 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57891 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57909 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57916 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57935 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57949 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57961 00:19:02.009 Removing: /var/run/dpdk/spdk_pid57979 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58004 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58022 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58044 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58114 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58141 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58150 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58173 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58188 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58190 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58231 00:19:02.009 Removing: 
/var/run/dpdk/spdk_pid58242 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58276 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58279 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58287 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58294 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58302 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58309 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58319 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58326 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58353 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58379 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58389 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58417 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58427 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58434 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58475 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58485 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58513 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58515 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58528 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58530 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58543 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58545 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58552 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58560 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58641 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58683 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58789 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58815 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58859 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58879 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58888 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58908 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58938 00:19:02.009 Removing: /var/run/dpdk/spdk_pid58952 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59028 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59042 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59084 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59171 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59216 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59244 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59337 00:19:02.009 Removing: /var/run/dpdk/spdk_pid59383 00:19:02.010 Removing: /var/run/dpdk/spdk_pid59409 00:19:02.269 Removing: /var/run/dpdk/spdk_pid59638 00:19:02.269 Removing: /var/run/dpdk/spdk_pid59730 00:19:02.269 Removing: /var/run/dpdk/spdk_pid59752 00:19:02.269 Removing: /var/run/dpdk/spdk_pid60089 00:19:02.269 Removing: /var/run/dpdk/spdk_pid60127 00:19:02.269 Removing: /var/run/dpdk/spdk_pid60439 00:19:02.269 Removing: /var/run/dpdk/spdk_pid60858 00:19:02.269 Removing: /var/run/dpdk/spdk_pid61127 00:19:02.269 Removing: /var/run/dpdk/spdk_pid61897 00:19:02.269 Removing: /var/run/dpdk/spdk_pid62728 00:19:02.269 Removing: /var/run/dpdk/spdk_pid62845 00:19:02.269 Removing: /var/run/dpdk/spdk_pid62913 00:19:02.269 Removing: /var/run/dpdk/spdk_pid64194 00:19:02.269 Removing: /var/run/dpdk/spdk_pid64406 00:19:02.269 Removing: /var/run/dpdk/spdk_pid64725 00:19:02.269 Removing: /var/run/dpdk/spdk_pid64840 00:19:02.269 Removing: /var/run/dpdk/spdk_pid64968 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65001 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65023 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65051 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65148 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65282 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65432 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65507 00:19:02.269 Removing: /var/run/dpdk/spdk_pid65904 00:19:02.269 Removing: /var/run/dpdk/spdk_pid66251 
00:19:02.269 Removing: /var/run/dpdk/spdk_pid66259 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68475 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68477 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68761 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68775 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68789 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68825 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68830 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68914 00:19:02.269 Removing: /var/run/dpdk/spdk_pid68922 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69030 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69032 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69140 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69147 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69559 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69602 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69711 00:19:02.269 Removing: /var/run/dpdk/spdk_pid69790 00:19:02.269 Removing: /var/run/dpdk/spdk_pid70105 00:19:02.269 Removing: /var/run/dpdk/spdk_pid70308 00:19:02.269 Removing: /var/run/dpdk/spdk_pid70685 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71219 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71659 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71715 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71768 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71816 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71937 00:19:02.269 Removing: /var/run/dpdk/spdk_pid71997 00:19:02.269 Removing: /var/run/dpdk/spdk_pid72052 00:19:02.270 Removing: /var/run/dpdk/spdk_pid72112 00:19:02.270 Removing: /var/run/dpdk/spdk_pid72453 00:19:02.270 Removing: /var/run/dpdk/spdk_pid73638 00:19:02.270 Removing: /var/run/dpdk/spdk_pid73781 00:19:02.270 Removing: /var/run/dpdk/spdk_pid74029 00:19:02.270 Removing: /var/run/dpdk/spdk_pid74591 00:19:02.270 Removing: /var/run/dpdk/spdk_pid74755 00:19:02.270 Removing: /var/run/dpdk/spdk_pid74914 00:19:02.270 Removing: /var/run/dpdk/spdk_pid75011 00:19:02.270 Removing: /var/run/dpdk/spdk_pid75186 00:19:02.270 Removing: /var/run/dpdk/spdk_pid75295 00:19:02.270 Removing: /var/run/dpdk/spdk_pid75955 00:19:02.270 Removing: /var/run/dpdk/spdk_pid75990 00:19:02.270 Removing: /var/run/dpdk/spdk_pid76025 00:19:02.270 Removing: /var/run/dpdk/spdk_pid76275 00:19:02.270 Removing: /var/run/dpdk/spdk_pid76306 00:19:02.270 Removing: /var/run/dpdk/spdk_pid76341 00:19:02.270 Clean 00:19:02.529 killing process with pid 48036 00:19:02.529 killing process with pid 48040 00:19:02.529 18:17:20 -- common/autotest_common.sh@1446 -- # return 0 00:19:02.529 18:17:20 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:19:02.529 18:17:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.529 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.529 18:17:20 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:19:02.529 18:17:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:02.529 18:17:20 -- common/autotest_common.sh@10 -- # set +x 00:19:02.529 18:17:21 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:02.529 18:17:21 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:02.529 18:17:21 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:02.529 18:17:21 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:19:02.529 18:17:21 -- spdk/autotest.sh@383 -- # hostname 00:19:02.529 18:17:21 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:02.787 geninfo: WARNING: invalid characters removed from testname! 00:19:29.327 18:17:43 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:29.327 18:17:47 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:31.858 18:17:49 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:33.761 18:17:52 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:36.291 18:17:54 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:38.853 18:17:57 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:41.388 18:17:59 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:41.388 18:17:59 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:19:41.388 18:17:59 -- common/autotest_common.sh@1690 -- $ lcov --version 00:19:41.388 18:17:59 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:19:41.388 18:17:59 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:19:41.388 18:17:59 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:19:41.388 18:17:59 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:19:41.388 18:17:59 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:19:41.388 18:17:59 -- scripts/common.sh@335 -- $ IFS=.-: 00:19:41.388 18:17:59 -- scripts/common.sh@335 -- $ read -ra ver1 00:19:41.388 18:17:59 -- scripts/common.sh@336 -- $ IFS=.-: 
00:19:41.388 18:17:59 -- scripts/common.sh@336 -- $ read -ra ver2 00:19:41.388 18:17:59 -- scripts/common.sh@337 -- $ local 'op=<' 00:19:41.388 18:17:59 -- scripts/common.sh@339 -- $ ver1_l=2 00:19:41.388 18:17:59 -- scripts/common.sh@340 -- $ ver2_l=1 00:19:41.388 18:17:59 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:19:41.388 18:17:59 -- scripts/common.sh@343 -- $ case "$op" in 00:19:41.388 18:17:59 -- scripts/common.sh@344 -- $ : 1 00:19:41.388 18:17:59 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:19:41.388 18:17:59 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.388 18:17:59 -- scripts/common.sh@364 -- $ decimal 1 00:19:41.388 18:17:59 -- scripts/common.sh@352 -- $ local d=1 00:19:41.388 18:17:59 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:19:41.388 18:17:59 -- scripts/common.sh@354 -- $ echo 1 00:19:41.388 18:17:59 -- scripts/common.sh@364 -- $ ver1[v]=1 00:19:41.388 18:17:59 -- scripts/common.sh@365 -- $ decimal 2 00:19:41.388 18:17:59 -- scripts/common.sh@352 -- $ local d=2 00:19:41.388 18:17:59 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:19:41.388 18:17:59 -- scripts/common.sh@354 -- $ echo 2 00:19:41.388 18:17:59 -- scripts/common.sh@365 -- $ ver2[v]=2 00:19:41.388 18:17:59 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:19:41.388 18:17:59 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:19:41.388 18:17:59 -- scripts/common.sh@367 -- $ return 0 00:19:41.388 18:17:59 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.388 18:17:59 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:19:41.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.388 --rc genhtml_branch_coverage=1 00:19:41.388 --rc genhtml_function_coverage=1 00:19:41.388 --rc genhtml_legend=1 00:19:41.388 --rc geninfo_all_blocks=1 00:19:41.388 --rc geninfo_unexecuted_blocks=1 00:19:41.388 00:19:41.388 ' 00:19:41.388 18:17:59 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:19:41.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.388 --rc genhtml_branch_coverage=1 00:19:41.388 --rc genhtml_function_coverage=1 00:19:41.388 --rc genhtml_legend=1 00:19:41.388 --rc geninfo_all_blocks=1 00:19:41.388 --rc geninfo_unexecuted_blocks=1 00:19:41.388 00:19:41.388 ' 00:19:41.388 18:17:59 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:19:41.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.388 --rc genhtml_branch_coverage=1 00:19:41.388 --rc genhtml_function_coverage=1 00:19:41.388 --rc genhtml_legend=1 00:19:41.388 --rc geninfo_all_blocks=1 00:19:41.388 --rc geninfo_unexecuted_blocks=1 00:19:41.388 00:19:41.388 ' 00:19:41.388 18:17:59 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:19:41.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.388 --rc genhtml_branch_coverage=1 00:19:41.388 --rc genhtml_function_coverage=1 00:19:41.388 --rc genhtml_legend=1 00:19:41.388 --rc geninfo_all_blocks=1 00:19:41.388 --rc geninfo_unexecuted_blocks=1 00:19:41.388 00:19:41.388 ' 00:19:41.388 18:17:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.388 18:17:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:41.388 18:17:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.388 18:17:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.388 18:17:59 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.388 18:17:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.388 18:17:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.388 18:17:59 -- paths/export.sh@5 -- $ export PATH 00:19:41.388 18:17:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.388 18:17:59 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:41.388 18:17:59 -- common/autobuild_common.sh@440 -- $ date +%s 00:19:41.388 18:17:59 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731953879.XXXXXX 00:19:41.388 18:17:59 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731953879.sd5snr 00:19:41.388 18:17:59 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:19:41.388 18:17:59 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:19:41.388 18:17:59 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:41.388 18:17:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:41.388 18:17:59 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:41.388 18:17:59 -- common/autobuild_common.sh@456 -- $ get_config_params 00:19:41.388 18:17:59 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:19:41.388 18:17:59 -- common/autotest_common.sh@10 -- $ set +x 00:19:41.388 18:17:59 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:19:41.388 18:17:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:41.388 18:17:59 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:41.388 18:17:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:41.388 18:17:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:19:41.388 18:17:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:41.388 18:17:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:41.388 18:17:59 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:41.388 18:17:59 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:41.388 18:17:59 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:41.388 18:17:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:41.388 + [[ -n 5231 ]] 00:19:41.389 + sudo kill 5231 00:19:41.398 [Pipeline] } 00:19:41.413 [Pipeline] // timeout 00:19:41.419 [Pipeline] } 00:19:41.433 [Pipeline] // stage 00:19:41.439 [Pipeline] } 00:19:41.453 [Pipeline] // catchError 00:19:41.462 [Pipeline] stage 00:19:41.464 [Pipeline] { (Stop VM) 00:19:41.475 [Pipeline] sh 00:19:41.754 + vagrant halt 00:19:45.945 ==> default: Halting domain... 00:19:51.234 [Pipeline] sh 00:19:51.513 + vagrant destroy -f 00:19:54.831 ==> default: Removing domain... 00:19:54.843 [Pipeline] sh 00:19:55.125 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:19:55.134 [Pipeline] } 00:19:55.153 [Pipeline] // stage 00:19:55.158 [Pipeline] } 00:19:55.171 [Pipeline] // dir 00:19:55.176 [Pipeline] } 00:19:55.190 [Pipeline] // wrap 00:19:55.196 [Pipeline] } 00:19:55.210 [Pipeline] // catchError 00:19:55.219 [Pipeline] stage 00:19:55.222 [Pipeline] { (Epilogue) 00:19:55.235 [Pipeline] sh 00:19:55.518 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:00.803 [Pipeline] catchError 00:20:00.805 [Pipeline] { 00:20:00.816 [Pipeline] sh 00:20:03.356 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:03.356 Artifacts sizes are good 00:20:03.364 [Pipeline] } 00:20:03.378 [Pipeline] // catchError 00:20:03.389 [Pipeline] archiveArtifacts 00:20:03.396 Archiving artifacts 00:20:03.511 [Pipeline] cleanWs 00:20:03.523 [WS-CLEANUP] Deleting project workspace... 00:20:03.523 [WS-CLEANUP] Deferred wipeout is used... 00:20:03.529 [WS-CLEANUP] done 00:20:03.531 [Pipeline] } 00:20:03.546 [Pipeline] // stage 00:20:03.551 [Pipeline] } 00:20:03.565 [Pipeline] // node 00:20:03.571 [Pipeline] End of Pipeline 00:20:03.611 Finished: SUCCESS
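# Editor's note -- appended recap, not part of the captured output above.
# The nvmf/common.sh calls traced at 18:17:06 build the in-kernel NVMe-oF/TCP
# target that the discovery log then reports. The xtrace output does not show
# the redirection target of each echo, so the configfs attribute files named
# below are the standard nvmet ones they most plausibly write to and should be
# read as an assumption; the directories, the values and the ln -s are verbatim
# from the trace:
#   mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
#   mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
#   mkdir /sys/kernel/config/nvmet/ports/1
#   echo SPDK-kernel_target > .../subsystems/kernel_target/<model or serial attribute, not shown>
#   echo 1                  > .../subsystems/kernel_target/attr_allow_any_host
#   echo /dev/nvme1n3       > .../subsystems/kernel_target/namespaces/1/device_path
#   echo 1                  > .../subsystems/kernel_target/namespaces/1/enable
#   echo 10.0.0.1           > .../ports/1/addr_traddr
#   echo tcp                > .../ports/1/addr_trtype
#   echo 4420               > .../ports/1/addr_trsvcid
#   echo ipv4               > .../ports/1/addr_adrfam
#   ln -s /sys/kernel/config/nvmet/subsystems/kernel_target \
#         /sys/kernel/config/nvmet/ports/1/subsystems/
# The nvme discover call that follows (-a 10.0.0.1 -t tcp -s 4420) confirms the
# port exposes both the discovery subsystem and kernel_target.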
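# Editor's note -- appended recap, not part of the captured output above.
# The kernel_target_abort runs traced between 18:17:06 and 18:17:15 reduce to
# the loop below; the binary path and every argument are copied from the trace,
# while the flag meanings in the trailing comment are inferred from the test's
# name and values rather than stated anywhere in the log:
#   for qd in 4 24 64; do
#     /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
#       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
#   done
#   # -q queue depth, -w rw -M 50 a 50/50 read-write mix, -o 4096-byte I/Os,
#   # -r the target's transport ID string
# All three runs report "failed: 0" for completed I/O, and the section closes
# with "END TEST kernel_target_abort".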
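# Editor's note -- appended recap, not part of the captured output above.
# The coverage post-processing traced between 18:17:21 and 18:17:57 amounts to
# the lcov sequence below. File names and filter patterns are copied from the
# trace; "$LCOV_OPTS" merely stands for the --rc lcov_*/genhtml_*/geninfo_*
# options spelled out on every call in the log, and "$out" for
# /home/vagrant/spdk_repo/spdk/../output:
#   lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
#     -t fedora39-cloud-1721788873-2326 -o "$out/cov_test.info"            # capture this run
#   lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
#     -o "$out/cov_total.info"                                             # merge with baseline
#   lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
#   lcov $LCOV_OPTS -q -r "$out/cov_total.info" --ignore-errors unused,unused \
#     '/usr/*' -o "$out/cov_total.info"
#   lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
#   lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
#   lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"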